Reuse identical parts across multiple buildouts - buildout

Is there a way to reuse parts across multiple buildouts? I've got several tools that I'd like to add to the buildout that don't change across buildouts. Here is an example case:
Configured global buildout options such that download-cache=~/.buildout/downloads
Buildout A needs cmake 2.8.4
Buildout B needs cmake 2.8.4
One way to do this is to put the following in each of their configurations:
[cmake]
recipe = zc.recipe.cmmi
url = http://www.cmake.org/files/v2.8/cmake-2.8.4.tar.gz
Since this doesn't change across the two buildouts, it would save disk space if it could be set up similarly to how eggs are cached. However, I cannot figure out a good way to do this; I don't think buildout was designed with this in mind.
Ideas:
Is it possible to redistribute the cmake tarball as a Python egg? Perhaps compile it for different platforms and redistribute the binaries inside eggs?
Another idea would be to have a recipe that can handle this kind of behavior: maybe a recipe that wraps around other recipes and checks whether the part is already installed globally. Perhaps it would look like this:
[cmake]
recipe = my.recipe.reusableparts
real-recipe = zc.recipe.cmmi
url = http://www.cmake.org/files/v2.8/cmake-2.8.4.tar.gz

The zc.recipe.cmmi recipe supports this use case directly, but it's under-documented online (the egg itself does contain the full documentation). Simply set the shared option to the directory of your choice:
[cmake]
recipe = zc.recipe.cmmi
url = http://www.cmake.org/files/v2.8/cmake-2.8.4.tar.gz
shared = ~/shared-buildout-cmmi-builds/cmake/
or simply set it to True to put it in your buildout download cache:
[cmake]
recipe = zc.recipe.cmmi
url = http://www.cmake.org/files/v2.8/cmake-2.8.4.tar.gz
shared = True
It's up to individual recipes to support such sharing behaviour. I don't think a wrapping recipe is going to be easy, seeing as buildout recipes can pretty much do anything.
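For reference, the global options mentioned in the question live in a per-user defaults file that buildout reads automatically. A minimal ~/.buildout/default.cfg enabling the shared download cache might look like this (the path is just an example):
[buildout]
download-cache = ~/.buildout/downloads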

Yocto conditional debug flag addition

Context
I'm currently working with Yocto on a build for a Raspberry Pi 4 (not really relevant). I would like to create two recipes for two different images:
An image with the bare minimum, to run fast; mostly for the production phase
An image with a lot of extra tools and debug options
The core idea is easy enough to sort out: I just create two image files; the first one requires the basic recipes for my image, and the second one requires the first file plus a few additional modules (such as apt, valgrind, gdb, etc.).
Question
Now here is the issue: Most of my recipes are built with cmake. I would like to add extra flags (especially -g) when building for the debug image.
What is the best way or most common way to do that? Is there a good practice?
To me this is a little tricky, due to the fact that BitBake does not support if statements.
My solution
For now, here is the solution I found: in the image recipe for the dev file, add a variable to define the debug mode:
DEBUG_MODE = "1"
Then, in every file that compiles a module with CMake, conditionally add the flag (-g for example):
TARGET_CXXFLAGS += "${@'-g' if d.getVar('DEV_MODE') == '1' else ''}"
However, I'm not sure how this applies to dependencies, and I'm not sure it is the best way to do it. It seems like a lot of duplicated code and tedious to maintain. Is there anything easier to manage?
Currently you set
DEBUG_MODE = "1"
but then check for DEV_MODE in your if condition, so this approach should not work at all.
Suggestion: Set -g globally
One step could be to define TARGET_CPPFLAGS in a *.conf for your debug-image to include -g.
TARGET_CPPFLAGS += "-g"
That way there should be no need to add it per recipe; it can be done globally. And since it is added in a conf file only used for your debug image, there is no need for any if/else.
I would also use TARGET_CPPFLAGS, as those are applied to both C and C++. But bear in mind that some recipes might unset the compiler flags again.
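A minimal sketch of that conf file (the file name and the way you include it are hypothetical; TARGET_CPPFLAGS itself is a standard Yocto variable):
# conf/debug-image.conf -- only pulled in when building the debug image
TARGET_CPPFLAGS += "-g"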

Conan.io use in embedded software development

Please allow me two questions about the use of Conan.io in our environment:
We are developing automotive embedded software. Usually this includes the integration of COTS libraries, mostly for communication and for operating systems such as AUTOSAR. These are provided in source code. Typical microcontrollers are Renesas RH850, RL78, or similar devices from NXP, Cypress, Infineon, and so on. We use gnumake (MinGW), Jenkins for CI, and have our own Eclipse CDT distribution as a standardized IDE.
My first question:
Those 3rd-party components are usually full of conditional compilation to allow a proper compile-time configuration. With this approach, the code, and thus the resulting binaries, are optimized both in size and in run-time behavior.
Besides those components, we of course have internal reusable components for different purposes. The compile-time configuration there is not as heavy as in the above example, but it is still present.
In one sentence: we have a lot of compile-time configuration. What could be a good approach to setting up a JFrog/Conan-based environment? Stay with the sources in every project?
XRef with Conan:
Is there a way to maintain cross-reference information coming from Conan? I am looking for something like "Project xxx is using Library lll Version vvv". In that way, we would be able to automatically identify other "users" of a library in case a problem is detected.
Thanks a lot,
Stefan
Conan recipes are based on Python and are thus very flexible, able to implement any conditional logic you might need.
As an example, the libxslt recipe in ConanCenter contains something like:
def build(self):
    self._patch_sources()
    if self._is_msvc:
        self._build_windows()
    else:
        self._build_with_configure()
And following this example, the autotools build contains code like:
def _build_with_configure(self):
    # excerpt from the recipe; assumes `import os` and
    # `from conans import AutoToolsBuildEnvironment, tools` at module level
    env_build = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
    full_install_subfolder = tools.unix_path(self.package_folder)
    # fix rpath
    if self.settings.os == "Macos":
        tools.replace_in_file(os.path.join(self._full_source_subfolder, "configure"),
                              r"-install_name \$rpath/", "-install_name ")
    configure_args = ['--with-python=no', '--prefix=%s' % full_install_subfolder]
    if self.options.shared:
        configure_args.extend(['--enable-shared', '--disable-static'])
    else:
        configure_args.extend(['--enable-static', '--disable-shared'])
So Conan is able to implement any compile-time configuration. That doesn't mean you always need to build from sources. The parametrization of the build is basically:
Settings: for "project-wide" configuration, like the OS or the architecture. Settings typically have the same value for all dependencies.
Options: for package-specific configuration, like a library being static or shared. Every package can have its own value, different from other packages.
You can implement the variability model for a package with settings and options and build the most-used binaries in advance. When a variant without a precompiled binary is requested, Conan will raise an error saying there is no precompiled binary for that configuration; users can then specify --build=missing to build it from sources.
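As a rough illustration, an in-house component could model its compile-time configuration like this (the package name and the trace_level option are made up for the example; the structure follows the Conan 1.x API used above):
from conans import ConanFile

class ComStackConan(ConanFile):
    name = "com_stack"
    version = "1.0"
    # project-wide configuration, typically identical for all dependencies
    settings = "os", "arch", "compiler", "build_type"
    # package-specific configuration, may differ per package
    options = {"shared": [True, False], "trace_level": [0, 1, 2]}
    default_options = {"shared": False, "trace_level": 0}

    def build(self):
        # forward the compile-time switches to the actual build,
        # e.g. as preprocessor defines driving the conditional compilation
        defines = ["TRACE_LEVEL=%s" % self.options.trace_level]
        # ... invoke gnumake (or any other build tool) with these defines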

CMake: How to tell where transitive dependency is coming from?

I'm in the process of rewriting a legacy CMake setup to use modern features like automatic dependency propagation. (i.e. using things like target_include_directories(<target> PUBLIC <dir>) instead of include_directories(<dir>).) Currently, we manually handle all project dependency information by setting a bunch of global directory properties.
In my testing, I've found a few examples where a target in the new build will link to a library that the old build would not. I'm not linking to it explicitly, so I know this is coming from the target's dependencies, but in order to find which one(s), I have to recursively look through all of the project's CMakeLists.txt files, following the dependency hierarchy upwards until I find the one that pulls in the library in question. We have dozens of libraries, so this is not a trivial process.
Does CMake provide any way to see, for each target, which of its dependencies were added explicitly, and which ones were propagated through transitive dependencies?
It looks like the --graphviz output does show this distinction, so clearly CMake knows the context internally. However, I'd like to write a tree-like script to show dependency information on the command line, and parsing Graphviz files sounds like both a nightmare and a hack.
As far as I can tell, cmake-file-api does not include this information. I thought the codemodel/target/dependencies field might work, but it lists both local and transitive dependencies mixed together. And the backtrace field of each dependency only ties back to the add_executable/add_library call for the current target.
You can parse the dot file generated by CMake's --graphviz option and extract the details you want. Below is a sample Python script to do that.
import pydot
import sys

# load the dot file produced by `cmake --graphviz=...` (it may contain several graphs)
graphs = pydot.graph_from_dot_file(sys.argv[1])
result = {}
for g in graphs:
    # collect every node that has a label, i.e. every target
    for node in g.get_node_list():
        if node.get("label") is not None:
            result[node.get("label")] = []
    # record each edge as "source target depends on destination target"
    for edge in g.get_edges():
        src = g.get_node(edge.get_source())[0].get("label")
        dst = g.get_node(edge.get_destination())[0].get("label")
        result[src].append(dst)
for r in result:
    print(r + ": " + ", ".join(result[r]))
You can also run this script from CMake as a custom target, so you can call it from your build system. You can find a sample CMake project here.
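The wiring for such a custom target could look roughly like this (the target, script, and file names are made up for the example):
add_custom_target(dep-tree
    COMMAND ${CMAKE_COMMAND} --graphviz=deps.dot ${CMAKE_SOURCE_DIR}
    COMMAND python3 ${CMAKE_SOURCE_DIR}/scripts/print_deps.py deps.dot
    WORKING_DIRECTORY ${CMAKE_BINARY_DIR})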

where is the list of names that cmake reserves?

Everything is in the title, but for more context:
I am creating a library in which all components are independent (only because it's easier to manage one git repo, really).
In that library's root folder, I have one sub-folder for each of the library's components, each with exactly 3 "interesting folders" (src, tests, include/components_name). I have hardcoded those folders in a foreach loop so that all actions are done for all modules by default.
The problem seems to be that one of the modules is named "option_parser", which is indeed relatively generic, and which also seems to be "reserved" by CMake, as does everything derived from it. I've tried "option_parser_test", "option_parser_tests", and other random names based on the "option_parser_" root.
So here is my question: where can I learn how to avoid the names that CMake reserves?
And how can I assign those names to my binaries anyway? (Because I feel it's silly to change a project's name because of a build system; that alone might be a strong enough reason to switch build systems.)
It's really quite simple. Use these three commands to see all reserved words:
cmake --help-command-list
cmake --help-variable-list
cmake --help-property-list
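To check a particular name against those lists, a simple grep is enough (a sketch, assuming a Unix-like shell and using the name from the question):
cmake --help-command-list | grep -i option
cmake --help-variable-list | grep -i option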
The answer from Cinder Biscuits above should probably already help you.
Additionally, you should probably read CMake's own documentation regarding the CMake language and in particular the note in the "Variables" section:
Note: CMake reserves identifiers that:
begin with CMAKE_ (upper-, lower-, or mixed-case), or
begin with _CMAKE_ (upper-, lower-, or mixed-case), or
begin with _ followed by the name of any CMake Command.

Getting imported targets through `find_package`?

The CMake manual of Qt 5 uses find_package and says:
Imported targets are created for each Qt module. Imported target names should be preferred instead of using a variable like Qt5<Module>_LIBRARIES in CMake commands such as target_link_libraries.
Is it special for Qt or does find_package generate imported targets for all libraries? The documentation of find_package in CMake 3.0 says:
When the package is found package-specific information is provided through variables and Imported Targets documented by the package itself.
And the manual for cmake-packages says:
The result of using find_package is either a set of IMPORTED targets, or a set of variables corresponding to build-relevant information.
But I have not seen any other FindXXX.cmake script whose documentation says that an imported target is created.
find_package is a two-headed beast these days:
CMake provides direct support for two forms of packages, Config-file Packages and Find-module Packages. (Source)
Now, what does that actually mean?
Find-module packages are the ones you are probably most familiar with. They execute a script of CMake code (such as this one) that does a bunch of calls to functions like find_library and find_path to figure out where to locate a library.
The big advantage of this approach is that it is extremely generic. As long as there is something on the filesystem, we can find it. The big downside is that it often provides little more information than the physical location of that something. That is, the result of a find-module operation is typically just a bunch of filesystem paths. This means that modelling stuff like transitive dependencies or multiple build configurations is rather difficult.
This becomes especially painful if the thing you are trying to find has itself been built with CMake. In that case, you already have a bunch of stuff modeled in your build scripts, which you now need to painstakingly reconstruct for the find script, so that it becomes available to downstream projects.
This is where config-file packages shine. Unlike find-modules, the result of running the script is not just a bunch of paths, but it instead creates fully functional CMake targets. To the dependent project it looks like the dependencies have been built as part of that same project.
This allows much more information to be transported in a very convenient way. The obvious downside is that config-file scripts are much more complex than find scripts. Hence you do not want to write them yourself, but have CMake generate them for you, or rather have the dependency provide a config file as part of its deployment, which you can then simply load with a find_package call. And that is exactly what Qt5 does.
This also means that if your own project is a library, you should consider generating a config file as part of the build process. It's not the most straightforward feature of CMake, but the results are pretty powerful.
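If you do, the core of it in the library's own CMake code looks roughly like this (all names are illustrative):
install(TARGETS foo EXPORT fooTargets
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib
        RUNTIME DESTINATION bin)
install(EXPORT fooTargets
        FILE fooConfig.cmake
        NAMESPACE foo::
        DESTINATION lib/cmake/foo)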
Here is a quick comparison of what the two approaches typically look like in CMake code:
Find-module style
find_package(foo)
target_link_libraries(bar ${FOO_LIBRARIES})
target_include_directories(bar ${FOO_INCLUDE_DIR})
# [...] potentially lots of other stuff that has to be set manually
Config-file style
find_package(foo)
target_link_libraries(bar foo)
# magic!
tl;dr: Always prefer config-file packages if the dependency provides them. If not, use a find-script instead.
Actually, there is no "magic" in the results of find_package: this command just searches for the appropriate FindXXX.cmake script and executes it.
If the Find script sets the XXX_LIBRARY variable, the caller can use that variable.
If the Find script creates imported targets, the caller can use those targets.
If the Find script neither sets the XXX_LIBRARY variable nor creates imported targets... well, then the script is used in some other way.
The documentation for find_package describes the usual usage of Find scripts. But in any case, you need to consult the documentation for the concrete script (this documentation is normally contained in the script itself).
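For Find modules that ship with CMake, that documentation can also be read directly from the command line, for example:
cmake --help-module FindZLIB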