How to use different toolchains - CMake

In a project where some targets are to be built and run on the build platform and other targets are to be built for a cross-compilation target, what options do we have when using CMake?
Currently I use CMAKE_BUILD_TYPE to define the toolchain, build type, and platform (for example -D CMAKE_BUILD_TYPE=arm_debug). In one place in the build, I switch tools (compilers, linker, etc.), command line flags, libraries, and so on according to the value of CMAKE_BUILD_TYPE. For every build type, I create a separate build directory.
This approach has its drawbacks: multiple build directories, and no easy way to make a target from one build type depend on a target from another build type (for example, a code generator that must be built for and run on the build platform, but whose output is needed by the cross-platform build).
As every build target currently has a single toolchain, I would love to associate each target with a target platform / toolset. This implies that some libraries would have to be built for more than one target platform, with different toolsets.

The 'one build type and platform per CMake run' limitation is fundamental and I would strongly advise against trying to work around it.
The proper solution here seems to me to split the build into several stages. In particular, for the scenario where a target from one build type depends on a target from another build type, you should not try to have those two targets in the same CMake project. Proper modularization is key here. Effective use of CMake's include command can help to avoid code duplication in the build scripts.
The big drawback of this approach is that the build process becomes more complex, as you now have several interdependent CMake projects that need to be built in a certain order with specific configurations. Then again, you already seem to be way beyond the point where you can build your whole system with a single command anyway. CMake can help manage this complexity with tools like ExternalProject, which allows you to build a CMake project from within another. Depending on your particular setup, a non-CMake layer written in your favorite scripting language might also be a viable alternative for ensuring that the different subprojects get built in the correct order.
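For illustration, here is a minimal sketch of such a staged superbuild using ExternalProject; the project names, directory layout, and toolchain file are assumptions, not something CMake prescribes:

include(ExternalProject)

# Host tools (e.g. a code generator), built with the native toolchain.
ExternalProject_Add(host_tools
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/host_tools
    INSTALL_COMMAND ""
)

# The cross-compiled part, configured with its own toolchain file and
# built only after the host tools are available.
ExternalProject_Add(firmware
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/firmware
    CMAKE_ARGS
        -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/cmake/arm-toolchain.cmake
        -DCMAKE_BUILD_TYPE=Debug
    DEPENDS host_tools
    INSTALL_COMMAND ""
)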
The sad truth is though that complex build setups are hard to manage. CMake does a great job at providing a number of tools for tackling this complexity but it cannot magically make the problem easier. Most of the limitations that CMake imposes on its user are there for a reason, namely that things would be even harder if you tried to work without them.

How to use kotlin.parallel.tasks.in.project=true

Long ago, when Kotlin 1.3.20 was released (https://blog.jetbrains.com/kotlin/2019/01/kotlin-1-3-20-released/), the ability to build in parallel using Gradle workers was added. Simply adding the kotlin.parallel.tasks.in.project = true setting does not give any gain in build speed. As far as I understand, this parameter can be useful only if I have several folders with classes independent of each other within the same project. I saw this setting used when building Gradle itself, but I have not seen anywhere that separate source sets were created for each folder.
Could you provide examples of how to correctly describe the build process in build.gradle.kts so that the mentioned option is actually used and gives an increase in build speed when there are several processor cores?
As of now, there is no simple way to parallelize compilation of a single source set containing Kotlin code (such as just the main sources), as the compiler has to analyze all of the sources together and resolve cross-references within the source set.
By default, without any additional options, Gradle runs compilation of Kotlin sources in parallel only in different subprojects. The option kotlin.parallel.tasks.in.project also allows Gradle to run parallel compilation tasks in one project, but that only works for different source sets (that don't depend on each other!), or different targets.
For example, in multiplatform projects, if you have several targets, kotlin.parallel.tasks.in.project allows Gradle to build the compilation outputs (JVM/Android classes, *.js, Kotlin/Native *.klibs and binaries) in parallel. In Android projects, if you build multiple product variants, this option also allows parallel Kotlin compilation for those variants.
In simpler project layouts, where you only have main and test source sets and a single target, there's no way to improve Kotlin compilation speed by using multiple processors, unless you split one project into several projects.
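As a sketch of a layout where the option can actually pay off, consider a multiplatform project with several independent targets. The target set and plugin version below are illustrative assumptions:

// gradle.properties:
// kotlin.parallel.tasks.in.project=true

// build.gradle.kts
plugins {
    kotlin("multiplatform") version "1.4.30"
}

kotlin {
    // These targets do not depend on each other, so with the option
    // enabled Gradle may run their compilation tasks in parallel.
    jvm()
    js { browser() }
    linuxX64()
}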

Conan.io use in embedded software development

Please allow me two questions about the use of Conan.io in our environment:
We are developing automotive embedded software. Usually, this includes the integration of COTS libraries, most of all for communication and the OS, such as AUTOSAR. These are provided in source code. Typical microcontrollers are the Renesas RH850, RL78, or similar devices from NXP, Cypress, Infineon, and so on. We use GNU Make (MinGW), Jenkins for CI, and have our own Eclipse CDT distribution as our standardized IDE.
My first question:
Those 3rd-party components are usually full of conditional compilation to allow proper compile-time configuration. With this approach, the code, and thus the resulting binaries, are optimized both in size and in run-time behavior.
Besides those components, we of course have internal reusable components for different purposes. The compile-time configuration here is not as heavy as in the above example, but it is still present.
In one sentence: we have a lot of compile-time configuration - what could be a good approach to set up a JFrog / Conan based environment? Stay with the sources in every project?
XRef with Conan:
Is there a way to maintain cross-reference information coming from Conan? I am looking for something like "Project xxx is using Library lll Version vvv". In that way, we would be able to automatically identify other "users" of a library in case a problem is detected.
Thanks a lot,
Stefan
Conan recipes are based on Python and are thus very flexible, being able to implement any conditional logic that you might need.
As an example, the libxslt recipe in ConanCenter contains something like:
def build(self):
    self._patch_sources()
    if self._is_msvc:
        self._build_windows()
    else:
        self._build_with_configure()
And following this example, the autotools build contains code like:
def _build_with_configure(self):
    env_build = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
    full_install_subfolder = tools.unix_path(self.package_folder)
    # fix rpath
    if self.settings.os == "Macos":
        tools.replace_in_file(os.path.join(self._full_source_subfolder, "configure"), r"-install_name \$rpath/", "-install_name ")
    configure_args = ['--with-python=no', '--prefix=%s' % full_install_subfolder]
    if self.options.shared:
        configure_args.extend(['--enable-shared', '--disable-static'])
    else:
        configure_args.extend(['--enable-static', '--disable-shared'])
So Conan is able to implement any compile-time configuration. That doesn't mean that you always need to build from sources. The parametrization of the build is basically:
Settings: for "project wide" configuration, like the OS or the architecture. Settings typically have the same value for all dependencies.
Options: for package-specific configuration, like a library being static or shared. Every package can have its own value, different from other packages.
You can implement the variability model for a package with settings and options, and pre-build the most commonly used binaries. When a variant is requested for which no precompiled binary exists, Conan will raise an error saying so. Users can then specify --build=missing to build that variant from sources.
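As a rough sketch of how such a compile-time parameter could be modeled with options (the package name and the rx_buffer_size option are made up for illustration, using the same Conan 1.x API as the example above):

from conans import ConanFile, CMake

class CommStackConan(ConanFile):
    name = "comm-stack"  # hypothetical in-house component
    version = "1.0"
    settings = "os", "arch", "compiler", "build_type"
    options = {"shared": [True, False], "rx_buffer_size": [64, 128, 256]}
    default_options = {"shared": False, "rx_buffer_size": 128}

    def build(self):
        cmake = CMake(self)
        # Map the package option onto the component's compile-time configuration.
        cmake.definitions["RX_BUFFER_SIZE"] = self.options.rx_buffer_size
        cmake.configure()
        cmake.build()

A consumer requesting a variant that has not been pre-built could then run, for example, conan install . -o comm-stack:rx_buffer_size=64 --build=missing.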

How to get cmake to find size of type in third-party header mpi.h?

I am working on a free-software project which involves high performance computing and the MPI library.
In my code, I need to know the size of the MPI_Offset type, which is defined in mpi.h.
Normally such projects would be built using autotools, and this problem would be easily solved. But for my sins, I am working with a CMake build, and I can't find any way to perform this simple task. There must be a way to do it - it is commonly done in autotools projects, so I assume it is also possible in CMake.
When I use:
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
It fails because mpi.h is not included in the generated C code.
Is there a way to tell check_type_size() to include mpi.h?
This is done via CMAKE_EXTRA_INCLUDE_FILES:
include(CheckTypeSize)
find_package(MPI)
include_directories(SYSTEM ${MPI_INCLUDE_PATH})
set(CMAKE_EXTRA_INCLUDE_FILES "mpi.h")
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
set(CMAKE_EXTRA_INCLUDE_FILES)
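For completeness, a sketch of how the result might then be consumed: SIZEOF_MPI_OFFSET holds the size in bytes, HAVE_SIZEOF_MPI_OFFSET indicates whether the check succeeded, and myprog is a hypothetical target:

if(HAVE_SIZEOF_MPI_OFFSET)
    # Make the detected size available to the source code.
    target_compile_definitions(myprog PRIVATE SIZEOF_MPI_OFFSET=${SIZEOF_MPI_OFFSET})
else()
    message(FATAL_ERROR "Could not determine the size of MPI_Offset")
endif()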
It may be more common to write platform checks with autotools, so here is some more information on how to write platform checks with CMake.
On a personal note: while CMake is certainly not the most pleasant exercise, for me autotools is reserved for the capital sins. It is really hard for me to defend CMake, but in this instance, the behavior is even documented. Naturally, setting a separate "variable" that you even have to reset after the fact, instead of just passing a parameter, conforms perfectly to the surprising "design principles" of CMake.

Getting imported targets through `find_package`?

The CMake manual of Qt 5 uses find_package and says:
Imported targets are created for each Qt module. Imported target names should be preferred instead of using a variable like Qt5<Module>_LIBRARIES in CMake commands such as target_link_libraries.
Is it special for Qt or does find_package generate imported targets for all libraries? The documentation of find_package in CMake 3.0 says:
When the package is found package-specific information is provided through variables and Imported Targets documented by the package itself.
And the manual for cmake-packages says:
The result of using find_package is either a set of IMPORTED targets, or a set of variables corresponding to build-relevant information.
But I have not seen another FindXXX.cmake script whose documentation says that an imported target is created.
find_package is a two-headed beast these days:
CMake provides direct support for two forms of packages, Config-file Packages and Find-module Packages.
Now, what does that actually mean?
Find-module packages are the ones you are probably most familiar with. They execute a script of CMake code (such as this one) that does a bunch of calls to functions like find_library and find_path to figure out where to locate a library.
The big advantage of this approach is that it is extremely generic. As long as there is something on the filesystem, we can find it. The big downside is that it often provides little more information than the physical location of that something. That is, the result of a find-module operation is typically just a bunch of filesystem paths. This means that modelling stuff like transitive dependencies or multiple build configurations is rather difficult.
This becomes especially painful if the thing you are trying to find has itself been built with CMake. In that case, you already have a bunch of stuff modeled in your build scripts, which you now need to painstakingly reconstruct for the find script, so that it becomes available to downstream projects.
This is where config-file packages shine. Unlike find-modules, the result of running the script is not just a bunch of paths, but it instead creates fully functional CMake targets. To the dependent project it looks like the dependencies have been built as part of that same project.
This allows much more information to be transported in a very convenient way. The obvious downside is that config-file scripts are much more complex than find scripts. Hence you do not want to write them yourself; instead, have CMake generate them for you. Or rather, have the dependency provide a config file as part of its deployment, which you can then simply load with a find_package call. That is exactly what Qt5 does.
This also means, if your own project is a library, consider generating a config file as part of the build process. It's not the most straightforward feature of CMake, but the results are pretty powerful.
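To give an idea of what that involves, here is a minimal sketch for a library target foo; all names and destinations are illustrative:

# Install the library and record it in an export set.
install(TARGETS foo EXPORT fooTargets
    ARCHIVE DESTINATION lib
    LIBRARY DESTINATION lib
    RUNTIME DESTINATION bin
    INCLUDES DESTINATION include
)
# Generate a config file from the export set.
install(EXPORT fooTargets
    FILE fooConfig.cmake
    NAMESPACE foo::
    DESTINATION lib/cmake/foo
)

A consumer can then call find_package(foo) and link against the imported target foo::foo.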
Here is a quick comparison of how the two approaches typically look in CMake code:
Find-module style
find_package(foo)
target_link_libraries(bar ${FOO_LIBRARIES})
target_include_directories(bar PRIVATE ${FOO_INCLUDE_DIR})
# [...] potentially lots of other stuff that has to be set manually
Config-file style
find_package(foo)
target_link_libraries(bar foo)
# magic!
tl;dr: Always prefer config-file packages if the dependency provides them. If not, use a find-script instead.
Actually, there is no "magic" in the results of find_package: this command just searches for an appropriate FindXXX.cmake script and executes it.
If the Find script sets the XXX_LIBRARY variable, then the caller can use this variable.
If the Find script creates imported targets, then the caller can use these targets.
If the Find script neither sets the XXX_LIBRARY variable nor creates imported targets ... well, then the usage of the script is somewhat different.
The documentation for find_package describes the usual usage of Find scripts. But in any case you need to consult the documentation of the concrete script (it is normally contained in the script itself).

In cmake, what is a "project"?

This question is about the project command and, by extension, what the concept of a project means in cmake. I genuinely don't understand what a project is, and how it differs from a target (which I do understand, I think).
I had a look at the cmake documentation for the project command, and it says that the project command does this:
Set a name, version, and enable languages for the entire project.
It should go without saying that using the word project to define project is less than helpful.
Nowhere on the page does it seem to explain what a project actually is (it goes through some of the things the command does, but doesn't say whether that list is exclusive or not). The cmake.org examples take us through a basic build setup, and while they use the project keyword, they don't explain what it does or means either, at least not as far as I can tell.
What is a project? And what does the project command do?
A project logically groups a number of targets (that is, libraries, executables and custom build steps) into a self-contained collection that can be built on its own.
In practice that means, if you have a project command in a CMakeLists.txt, you should be able to run CMake from that file and the generator should produce something that is buildable. In most codebases, you will only have a single project per build.
Note however that you may nest multiple projects. A top-level project may include a subdirectory which is in turn another self-contained project. In this case, the project command introduces additional scoping for certain values. For example, the PROJECT_BINARY_DIR variable will always point to the root binary directory of the current project. Compare this with CMAKE_BINARY_DIR, which always points to the binary directory of the top-level project. Also note that certain generators may generate additional files for projects. For example, the Visual Studio generators will create a .sln solution file for each subproject.
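For illustration, a minimal sketch of such a nested layout (all names are made up):

# CMakeLists.txt (top-level project)
cmake_minimum_required(VERSION 3.0)
project(Top)
add_subdirectory(sub)

# sub/CMakeLists.txt (self-contained sub-project)
project(Sub)
# Here PROJECT_BINARY_DIR refers to <build>/sub,
# while CMAKE_BINARY_DIR still refers to the top-level <build> directory.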
Use sub-projects if your codebase is very complex and you need users to be able to build certain components in isolation. This gives you a very powerful mechanism for structuring the build system. Due to the increased coding and maintenance overhead required to make the several sub-projects truly self-contained, I would advise going down that road only if you have a real use case for it. Splitting the codebase into different targets should always be the preferred mechanism for structuring the build, while sub-projects should be reserved for those rare cases where you really need to make a subset of targets self-contained.