How can out-of-tree sources be included in a Meson project? - embedded

Some background: I am using Meson for an embedded C project. I have it working (example), but it isn't very clean.
The specific problem I would like to solve is including an out-of-tree Board Support Package (BSP) - a tree of headers and C files that act as initialization and abstraction code for a particular platform.
Previously I have been copying headers out of a vendor-provided BSP into my project on an as-needed basis, which does work, but there are disadvantages to doing this, the most important being the lack of reproducibility. Additionally, it causes duplication of code and makes it difficult to track where a particular bug came from if the bug is in the BSP.
The ways I have tried are:
Use an option in meson_options.txt to tell Meson where the BSP is on disk via meson configure. The issue with this method is that Meson throws an error during setup because options cannot be set until after setup is complete, and so it cannot find the requisite directories and refuses to continue.
Use a subproject and repeat the above - this causes the same issue.
I would ideally like the end-user to be able to set the BSP path with meson configure, instead of having to ever edit the build description (the whole point of Meson is to be user friendly!).
Is this possible? If it is not possible, why, and are there alternatives/common practice ways of doing this that I should know about?

In your question, you state that
options cannot be set until after setup is complete
That is not true. You can pass any option you want during the meson setup, using the following syntax:
$ meson <build dir> -D<option>=<value>
So I think the first way you tried to implement your option was correct, you just need to tell the user to set it directly during setup.
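For illustration, here is a minimal sketch of that first approach; the option name bsp_path and the paths are assumptions, not taken from your project:

# meson_options.txt
option('bsp_path', type : 'string', value : '', description : 'Path to the out-of-tree BSP')

# meson.build (excerpt)
bsp_root = get_option('bsp_path')
bsp_inc = include_directories(bsp_root)   # assumes the BSP headers sit directly under the BSP root

$ meson <build dir> -Dbsp_path=/path/to/vendor/bsp

Passing -Dbsp_path at setup means the path is already known when the build files are first evaluated, which avoids the error you saw.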

Related

When should I use find_package

I am learning CMake, and I find it hard to understand when I should use find_package.
For separate compilation, we need to let the compiler know where to find the header files, which can be done with target_include_directories. For linking, we need to let the linker know where the implementation is, which can be done with target_link_libraries. It seems like that is all we need to do to compile a project. Could anyone explain why and when we should use find_package?
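For context, the manual approach described in the question might look roughly like this (the target name and paths are made up for illustration):

add_executable(app main.cpp)
target_include_directories(app PRIVATE /opt/foo/include)   # tell the compiler where the headers are
target_link_libraries(app PRIVATE /opt/foo/lib/libfoo.a)   # tell the linker where the implementation is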
If a package you intend to use allows for the use of find_package, you should use it. If a package comes with a working configuration script, it encourages you to use the library the way it's intended to be used, and it will likely come with a simple way to add the required include directories and dependencies.
When is it possible to use find_package?
There needs to be either a configuration script (<PackageName>Config.cmake or packagename-config.cmake) that gets installed with the package, or a find script (Find<PackageName>.cmake). The latter in some cases even comes with the CMake installation instead of the package itself; see CMake find modules.
Should you create missing scripts yourself?
There are several benefits to creating a package configuration script yourself, even if a package doesn't come with an existing configuration or find script:
The scripts separate the information about libraries from the logic used to create your own targets. The use of the two commands find_package and target_link_libraries is concise (see the sketch after this list), whereas any logic you may need to collect and apply information such as dependencies, include directories, minimal versions of the C++ standard to use, etc. would probably take up much more space in your CMakeLists.txt files, making them harder to understand.
It makes the library used easy to replace. Basically, all it takes to go with a different version of the same package is to modify CMAKE_PREFIX_PATH, CMAKE_MODULE_PATH or the package-specific <PackageName>_ROOT variables. If you ever want to try out different versions of the same library, this is incredibly useful.
The logic is reusable. If you need to use the same functionality in a different project, it takes little effort to reuse the same logic. Even if a library is only used within a single project, but in multiple places, the use of find_package can help keep the logic for "importing" a lib close to its use (see also the first bullet point).
There can be multiple versions of the same library with automatic selection of applicable ones. Note that this requires the use of a version file, but this file allows you to specify whether a version of the package is suitable for the current project. This allows for checking the target architecture, etc., which is helpful when cross compiling or when providing both 32-bit and 64-bit versions of a library on Windows: if a version file indicates a mismatch, the search for a suitable version simply continues with different paths instead of failing fatally on the first mismatch.
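As an illustration of the first point, the consumer side of find_package is typically just a few lines (the package name Protobuf and the target names are examples, not a prescription):

find_package(Protobuf REQUIRED)                             # locates the package and defines imported targets
add_executable(app main.cpp)
target_link_libraries(app PRIVATE protobuf::libprotobuf)    # include dirs, defines and dependencies come along

All the logic for locating headers, libraries and transitive dependencies lives behind the imported target instead of in your CMakeLists.txt.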
You will probably find CMake's guide on using dependencies helpful. It describes find_package and alternatives, and when each one is relevant / useful. Here's an excerpt from the section on find_package (italics added):
A package needed by the project may already be built and available at some location on the user's system. That package might have also been built by CMake, or it could have used a different build system entirely. It might even just be a collection of files that didn't need to be built at all. CMake provides the find_package() command for these scenarios. It searches well-known locations, along with additional hints and paths provided by the project or user. It also supports package components and packages being optional. Result variables are provided to allow the project to customize its own behavior according to whether the package or specific components were found.
find_package requires that the package provide CMake support in the form of specific files that describe the package's contents to CMake. Some library authors provide this support (the most desirable scenario for you, the package consumer); some don't, but are prominent enough that CMake itself comes with such files for those packages; or, in the worst case, there is no CMake support at all, in which case you can either do something to get one of the previous good outcomes, or perform some kludges to get the job done (i.e. define the targets yourself in your project's CMake config).
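For completeness, the "define the targets yourself" kludge might look roughly like the following; the library name FooBar and the paths are purely hypothetical:

add_library(FooBar::FooBar UNKNOWN IMPORTED)
set_target_properties(FooBar::FooBar PROPERTIES
    IMPORTED_LOCATION "/opt/foobar/lib/libfoobar.a"          # where the built library lives
    INTERFACE_INCLUDE_DIRECTORIES "/opt/foobar/include")     # headers consumers need

target_link_libraries(app PRIVATE FooBar::FooBar) then works as if the package had provided the target itself.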

Variable interpolation in -D option

As a package manager for a Linux distribution, I want to install docs into a separate prefix. With CMake projects, the docs installation location is controlled by CMAKE_INSTALL_DOCDIR from the GNUInstallDirs module. Unfortunately, unlike the other directory variables, this one contains the project name, so I cannot just use cmake "-DCMAKE_INSTALL_DOCDIR=$myDocPrefix/doc".
With GNU Make, I would run make "DOCDIR=$myDocPrefix/doc/\$(PROJECT_NAME)" and have Make interpolate it, but the documentation of CMake's -D option does not mention interpolation, and I understand that CMake uses a much more complex system of cache entries where interpolation might be problematic (especially if the referenced variable is not yet in the cache).
I could pass a tailor-made CMAKE_INSTALL_DOCDIR to each CMake project, but that would be bothersome, as I would have to do it in every package definition manually; being able to define a configureCmakeProject function and have it take care of everything automatically would be better. When setting it manually, I would also want to make sure it matches the PROJECT_NAME of the respective CMake project – well, I could give up on that and just use $packageName from the package definition instead, but keeping packages as close to upstream as possible is preferred.
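To make the idea concrete, such a wrapper could be sketched as a shell function like the one below; the function name, variables and layout are assumptions about the packaging environment, not an existing API:

configureCmakeProject() {
    local packageName="$1" sourceDir="$2"
    # Hypothetical: derive the doc dir from the package name, leave everything else at its default.
    cmake -S "$sourceDir" -B build \
          -DCMAKE_INSTALL_DOCDIR="$myDocPrefix/doc/$packageName"
}

The catch described above remains: $packageName is not necessarily the same as the PROJECT_NAME the CMake project itself uses.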
Alternatively, I could try to grep CMakeLists.txt for the project command, but that seems fragile and might still result in misalignments. I doubt it is possible to extract it using some CMake API, since the project is not configured at that time and we actually need the value to configure the project.
Is there a way I can configure CMAKE_INSTALL_DOCDIR to use a custom prefix but still keep the project name set by the CMake project?

Force CMake to install targets to architecture-specific directories?

I'm currently having this issue with the Google Protobuf Library, but it is a recurring problem and will likely occur with many if not all 3rd-party packages that I want to build and install from source.
I'm developing for Windows, and we need to be able to generate both 32-bit and 64-bit versions of our DLLs. It was relatively straightforward to get CMake to install our own modules to architecture-specific subdirectories, e.g. D:\libraries\bin\i686 and D:\libraries\lib\i686. But I'm having trouble achieving the same thing with 3rd-party libraries such as Protobuf.
I could, of course, use distinct CMAKE_INSTALL_PREFIX and CMAKE_PREFIX_PATH combinations (e.g. D:\libraries-i686 and D:\libraries-x86_64), and will probably end up doing just that, but it bothers me that there doesn't seem to be a better alternative. The docs for find_package() clearly show that the search procedure does attempt architecture-specific search paths, so why do the CMake files of popular libraries not generally seem to support installing to architecture-specific subdirectories?
Or could it be that it is just a matter of setting the right CMAKE_XXX variable?
Thanks to @arrowd for pointing me in the right direction, I now have my answer, though it is not exactly what I had hoped for.
CMAKE_LIBRARY_OUTPUT_DIRECTORY and CMAKE_RUNTIME_OUTPUT_DIRECTORY specify the build output directories, not the install directories. As it turns out, though, there are variables for the install directories too, called CMAKE_INSTALL_BINDIR and CMAKE_INSTALL_LIBDIR – they are plainly visible (along with plenty more) in the cmake-gui interface when "Advanced" is checked.
I tried setting those two manually (to bin\i686 and lib\i686), and it works: the Protobuf INSTALL target copies the files where I wanted to have them, i.e. where the CMake script of my consumer project will find them in an architecture-safe manner.
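For reference, setting them on the command line looks roughly like this (the source path and install prefix are placeholders):

cmake <protobuf source dir> -DCMAKE_INSTALL_PREFIX=D:/libraries -DCMAKE_INSTALL_BINDIR=bin/i686 -DCMAKE_INSTALL_LIBDIR=lib/i686
cmake --build . --config Release --target INSTALL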
I'm not sure how I feel about this - I would have preferred something like a CMAKE_INSTALL_ARCHITECTURE or CMAKE_ARCHITECTURE_SUBDIR variable that CMake would automatically append to relevant install paths. The solution above requires overriding defaults that I would prefer to leave untouched.
Under the circumstances, my fallback approach might still be the better option. That approach however requires that the choice of architecture be made very early on, typically when running the script that initializes the CMake-specific environment variables that will be passed to cmake when configuring build directories. And it's worse when using cmake-gui, which requires the user to set all directories manually.
In the end, I'm still undecided.

TFS Build ignores configured Code Analysis ruleset

I have a solution that uses a hybrid .csproj and project.json combination (for NuGet management purposes). So basically the project.json file is working as a packages.config file with a floating-version capability.
This solution uses a custom ruleset that is distributed via a package and is imported automatically. On the dev machine, it works without a problem.
On the build machine (that is, inside the machine itself, working as a user), the solution also compiles without a problem.
However, when a vNext build (is this the name for the new build system?) is queued, it completely ignores the custom ruleset and just uses the StyleCop one (which is also included), which gives a bunch of warnings. Said warnings should not appear, as the custom ruleset basically suppresses them (e.g. Warning SA1404: Code analysis suppression must have justification, Warning SA1124: Do not use regions, etc.).
As far as I have checked, there is no setting to specify the ruleset, and this works with XAML Builds. What is different in this new build system that is causing this? Is there a way to force/specify the Code Analysis Rule Set from the definition?
Thanks in advance for any help or advice on the matter.
Update/Edit
After debugging back and forth with the wonderful help of jessehouwing, I must include the following detail in my initial report (which I left out because I did not know it was influential):
I am using SonarQube Analysis on my build definition.
I initially did not mention it because I did not know that it replaces Code Analysis at build time (and not only when it "analyzes", as I thought).
If you are using the SonarQube tasks
The SonarQube tasks generate a new Code Analysis Ruleset file on the fly and will overwrite the one configured for the projects. These rulesets will be used regardless of what you've previously specified.
There is a trick to the naming of the rulesets through which you can include your own overrides.
More information on the structure can be found in the blog post from the SonarQube/Visual Studio team. Basically when you Bind your solution to SonarQube it will generate 2 ruleset files. One which will be overwritten during build, the other containing your customizations.
There is a toolkit/SDK to generate a SonarQube plugin for custom analyzers, which allows you to import your rules into SonarQube so it will know what rules to activate for your project(s).
If you're not using SonarQube
Yes you can specify the ruleset you want to use and force Code Analysis to run. It requires a couple of MsBuild arguments:
/p:RunCodeAnalysis=true /p:CodeAnalysisRuleset="PathToRuleset"
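For example, when invoking MSBuild directly (the solution name and ruleset path below are placeholders):

msbuild MySolution.sln /p:RunCodeAnalysis=true /p:CodeAnalysisRuleSet="C:\Rulesets\Custom.ruleset"

In a vNext build definition, the same switches go into the MSBuild Arguments field of the Visual Studio Build task.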
Or you can use my MsBuild helper extension to configure these settings with the help of a UI template.

How to build several configurations at once with CMake?

E.g. how should I build release and debug versions at the same time? I guess the answer makes use of cache variables and some kind of "collection" of them.
Getting configuration parameters from cache variables is the common way, isn't it? If the answer is yes, what is the best way to use several "collections" of them?
Thanks a lot!
You don't specify the platform you are talking about. The Makefile-based generators will only build one configuration at a time, and the normal way to build several configurations is to use separate build trees, e.g. one for 64-bit Linux on Intel, one for 32-bit Windows, etc. Most CMake projects advise out-of-source builds, and assuming you wrote your CMakeLists files correctly, you could have ~/src/YourProject, ~/build/YourProject-Release, and ~/build/YourProject-Debug.
This is the advised way to do it, assuming your source tree does not have any CMakeCache.txt etc in it. You can then run cmake -DCMAKE_BUILD_TYPE:STRING=Debug ~/src/YourProject in the debug directory, and similar for the release. This has the advantage that you can point dependent projects at the appropriate configuration.
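For example, with the layout above (Makefile generator on Linux assumed):

cd ~/build/YourProject-Debug
cmake -DCMAKE_BUILD_TYPE:STRING=Debug ~/src/YourProject && make
cd ~/build/YourProject-Release
cmake -DCMAKE_BUILD_TYPE:STRING=Release ~/src/YourProject && make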
The Boost CMake project has also explored building all configurations in the same build tree using library name mangling to differentiate. This may be worth looking at if you must build all configurations in the same build tree.
(for fellow googlers)
Be careful of not confusing build types and build configurations.
If you really mean "build types" such as debug and release and want to build them at the same time, then the CMake FAQ gives an answer: "How can I build multiple modes without switching?"
Basically it involves using several out-of-source builds.