Please allow me two questions about the use of Conan.io in our environment:
We are developing automotive embedded software. Usually, this includes the integration of COTS libraries, mostly for communication stacks and operating systems such as AUTOSAR. These are provided in source code. Typical microcontrollers are Renesas RH850, RL78, or similar devices from NXP, Cypress, Infineon, and so on. We use GNU Make (MinGW), Jenkins for CI, and have our own Eclipse CDT distribution as the standardized IDE.
My first question:
Those 3rd-party components are usually full of conditional compilation to perform a proper compile-time configuration. With this approach, the code and thus the resulting binaries are optimized, both in size and in run-time behavior.
Besides those components, we of course have internal reusable components for different purposes. The compile-time configuration here is not as heavy as in the above example, but still present.
In one sentence: we have a lot of compile-time configuration. What could be a good approach to set up a JFrog/Conan based environment? Should we stay with the sources in every project?
My second question: cross-references with Conan.
Is there a way to maintain cross-reference information coming from Conan? I am looking for something like "Project xxx is using Library lll Version vvv". That way, we would be able to automatically identify other "users" of a library in case a problem is detected.
Thanks a lot,
Stefan
Conan recipes are based on Python and thus are very flexible, being able to implement any conditional logic that you might need.
As an example, the libxslt recipe in ConanCenter contains something like:
def build(self):
    self._patch_sources()
    if self._is_msvc:
        self._build_windows()
    else:
        self._build_with_configure()
And following this example, the autotools build contains code like:
def _build_with_configure(self):
    env_build = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
    full_install_subfolder = tools.unix_path(self.package_folder)
    # fix rpath
    if self.settings.os == "Macos":
        tools.replace_in_file(os.path.join(self._full_source_subfolder, "configure"), r"-install_name \$rpath/", "-install_name ")
    configure_args = ['--with-python=no', '--prefix=%s' % full_install_subfolder]
    if self.options.shared:
        configure_args.extend(['--enable-shared', '--disable-static'])
    else:
        configure_args.extend(['--enable-static', '--disable-shared'])
So Conan is able to implement any compile-time configuration. That doesn't mean that you always need to build from sources. The parametrization of the build is basically:
Settings: for "project-wide" configuration, like the OS or the architecture. Settings typically have the same value for all dependencies.
Options: for package-specific configuration, like a library being static or shared. Every package can have its own value, different from other packages.
You can implement the variability model for a package with settings and options and pre-build the most commonly used binaries. When a variant is requested for which no precompiled binary exists, Conan will raise an error saying there is no binary for that configuration; users can then specify --build=missing to build it from sources.
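For illustration only, here is a minimal sketch of how one of your COTS components could be wrapped. The package name, option names, preprocessor defines, and the make invocation are hypothetical stand-ins for your actual gnumake-based build:

from conans import ConanFile

class CommStackConan(ConanFile):
    # hypothetical recipe for a source-delivered COTS communication stack
    name = "comm_stack"
    version = "1.0"
    settings = "os", "arch", "compiler", "build_type"
    # compile-time configuration exposed as Conan options
    options = {"can_fd": [True, False], "max_channels": ["2", "4", "8"]}
    default_options = {"can_fd": False, "max_channels": "2"}
    exports_sources = "src/*", "Makefile"

    def build(self):
        # map Conan options onto the component's conditional-compilation switches
        defines = ["COMM_MAX_CHANNELS=%s" % self.options.max_channels]
        if self.options.can_fd:
            defines.append("COMM_ENABLE_CAN_FD=1")
        self.run("make DEFINES='%s'" % " ".join("-D" + d for d in defines))

    def package(self):
        self.copy("*.h", dst="include", src="src")
        self.copy("*.a", dst="lib", keep_path=False)

Each distinct combination of settings and options gets its own package ID in Artifactory, so consumers can run e.g. conan install . -o comm_stack:can_fd=True --build=missing and only fall back to a source build when no prebuilt binary exists for that variant.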
Context
I'd like to run a Qt application on an i.MX6-based system with a Yocto-built image. I already have the meta-qt5 layer in my project. I started with a simple Qt5 application that only needs the following modules:
QT += core gui widgets
All I have to do is make sure my bitbake recipe has DEPENDS += qtbase and is based on the qmake class with inherit qmake5. And it builds and runs on the target! No problem.
Problem
Now I'd like to add another Qt5 application, this time with the following modules and one plugin:
QT += core gui widgets quick qml svg xml network charts
QTPLUGIN += qsvg
Unfortunately, I'm not able to simply add these to my DEPENDS variable and get it to work. Googling around for how to add support reveals what seems to be a sprawling assortment of solutions. I'll enumerate what I've found here:
I need to add inherit populate_sdk_qt5 to instruct bitbake to build the recipe against the SDK that contains the libraries for the modules (see here)
I need to add IMAGE_FEATURES += dev-pkgs to the recipe (see here)
I need to modify local.conf for the system and add lines like PACKAGECONFIG_append_pn-qttools = "..." and also PACKAGECONFIG_append_pn-qtbase = "..."
I need to modify layer.conf in my layer and add things like IMAGE_INSTALL_append = "qtbase qtquick ..." (slide 53 here)
I need to manually patch the Qt5 toolchain for charts? (see here)
I need to compile my image using bitbake <target> -c populate_sdk? (see here again)
At this point, I'm really unsure what exactly is going on. It seems we're modifying the recipe, the layer configuration file, the distribution configuration file, and even meta-Qt layer files. I know that I fundamentally need to do a few things:
Compile the application against the Qt5 SDK
Compile the needed plugins + modules for the target architecture
Make sure the appropriate binaries (images) are copied to the target.
But it has become a bit unclear what does what. I know that IMAGE_INSTALL_append adds packages to the target image, but I am lost as to the proper way to add the modules. I don't want to go about randomly adding lines, so I'm hoping someone can clear up a bit what exactly I need to be looking at in order to add support for a Qt5 module for an application.
There are different problems stated here; your preferred way seems to be directly building a recipe, not using the toolchain. So you need the image to provide what your application needs.
First of all, qtsvg is not part of Qt Base; it is a separate module, so you need it installed.
Add Qt SVG support
You need Qt SVG on the target in order to run your app. Either in your image recipe or in local.conf you need:
IMAGE_INSTALL_append = " qtsvg"
In fact, your app's recipe needs Qt SVG at build time, so you need to DEPENDS on it in your app's recipe like this:
DEPENDS = "qtsvg"
Here qtsvg is the name of the other recipe, namely qtsvg_git.bb, not to be confused with the similarly named qsvg plugin. It will get pulled in automatically at build time on your development machine; otherwise your app won't even build.
Remember that Yocto creates a simulated image tree (the per-recipe sysroot) in the TMP folder in order to build (yes, it does this for each recipe), so you must declare what your recipe needs or it won't be found and your build will fail.
You can also check the recipe for a Qt5 example, as it also has DEPENDS and RDEPENDS. You can get more info on those here.
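As a rough sketch of where the pieces go (recipe name, URL, and revision are hypothetical), the important part is the module-to-recipe mapping: quick and qml come from qtdeclarative, svg from qtsvg, charts from qtcharts, while xml and network are already part of qtbase:

# myapp_1.0.bb -- hypothetical qmake-based application recipe
SUMMARY = "Qt5 demo application"
LICENSE = "CLOSED"
SRC_URI = "git://example.com/myapp.git;protocol=https"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"

inherit qmake5

# Build-time dependencies: one recipe per Qt module referenced in the .pro file.
DEPENDS = "qtbase qtdeclarative qtsvg qtcharts"

# Runtime dependencies: the QML and SVG plugins must also end up on the target
# (exact package names may differ between meta-qt5 versions).
RDEPENDS_${PN} += "qtdeclarative-qmlplugins qtsvg-plugins"

If the application package itself is added to the image, its RDEPENDS should then drag the runtime module packages in with it.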
I'm in the process of rewriting a legacy CMake setup to use modern features like automatic dependency propagation. (i.e. using things like target_include_directories(<target> PUBLIC <dir>) instead of include_directories(<dir>).) Currently, we manually handle all project dependency information by setting a bunch of global directory properties.
In my testing, I've found a few examples where a target in the new build will link to a library that the old build would not. I'm not linking to it explicitly, so I know this is coming from the target's dependencies, but in order to find which one(s), I have to recursively look through all of the project's CMakeLists.txt files, following the dependency hierarchy until I find one that pulls in the library in question. We have dozens of libraries, so this is not a trivial process.
Does CMake provide any way to see, for each target, which of its dependencies were added explicitly, and which ones were propagated through transitive dependencies?
It looks like the --graphviz output does show this distinction, so clearly CMake knows the context internally. However, I'd like to write a tree-like script to show dependency information on the command line, and parsing Graphviz files sounds like both a nightmare and a hack.
As far as I can tell, cmake-file-api does not include this information. I thought the codemodel/target/dependencies field might work, but it lists both local and transitive dependencies mixed together. And the backtrace field of each dependency only ties back to the add_executable/add_library call for the current target.
You can parse the dot file that CMake generates for Graphviz and extract the details you want. Below is a sample Python script to do that.
import sys

import pydot

# Load the dot file produced by `cmake --graphviz=<file>`; pydot returns a list of graphs.
graphs = pydot.graph_from_dot_file(sys.argv[1])

result = {}
for g in graphs:
    # Every labelled node corresponds to a CMake target (the label is the target name).
    for node in g.get_node_list():
        if node.get("label") is not None:
            result[node.get("label")] = []
    # Each edge is a direct dependency from the source target to the destination target.
    for edge in g.get_edges():
        src = g.get_node(edge.get_source())[0].get("label")
        dst = g.get_node(edge.get_destination())[0].get("label")
        result[src].append(dst)

# Print each target followed by its direct dependencies.
for target, deps in result.items():
    print(target + ":" + ",".join(deps))
You can also add this script as a custom target in CMake, so you can call it from your build system. You can find a sample CMake project here.
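One possible way to wire this up (script location and target name are hypothetical) is to regenerate the dot file and run the script from a custom target:

# assumes the script above is saved as cmake/print_deps.py
find_package(Python3 COMPONENTS Interpreter REQUIRED)
add_custom_target(print-deps
    COMMAND ${CMAKE_COMMAND} --graphviz=${CMAKE_BINARY_DIR}/deps.dot ${CMAKE_BINARY_DIR}
    COMMAND ${Python3_EXECUTABLE} ${CMAKE_SOURCE_DIR}/cmake/print_deps.py ${CMAKE_BINARY_DIR}/deps.dot
    COMMENT "Printing the target dependency tree"
    VERBATIM)

Running cmake --build . --target print-deps then dumps the direct dependencies of every target.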
The CMake manual of Qt 5 uses find_package and says:
Imported targets are created for each Qt module. Imported target names should be preferred instead of using a variable like Qt5<Module>_LIBRARIES in CMake commands such as target_link_libraries.
Is it special for Qt or does find_package generate imported targets for all libraries? The documentation of find_package in CMake 3.0 says:
When the package is found package-specific information is provided through variables and Imported Targets documented by the package itself.
And the manual for cmake-packages says:
The result of using find_package is either a set of IMPORTED targets, or a set of variables corresponding to build-relevant information.
But I did not see another FindXXX.cmake script whose documentation says that an imported target is created.
find_package is a two-headed beast these days:
CMake provides direct support for two forms of packages, Config-file Packages
and Find-module Packages
Source
Now, what does that actually mean?
Find-module packages are the ones you are probably most familiar with. They execute a script of CMake code (such as this one) that does a bunch of calls to functions like find_library and find_path to figure out where to locate a library.
The big advantage of this approach is that it is extremely generic. As long as there is something on the filesystem, we can find it. The big downside is that it often provides little more information than the physical location of that something. That is, the result of a find-module operation is typically just a bunch of filesystem paths. This means that modelling stuff like transitive dependencies or multiple build configurations is rather difficult.
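To make that concrete, a stripped-down find-module for a hypothetical library foo boils down to little more than a handful of path lookups and result variables:

# FindFoo.cmake (sketch) -- all it produces is a couple of variables
find_path(FOO_INCLUDE_DIR NAMES foo/foo.h)
find_library(FOO_LIBRARY NAMES foo)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Foo DEFAULT_MSG FOO_LIBRARY FOO_INCLUDE_DIR)

set(FOO_LIBRARIES ${FOO_LIBRARY})
set(FOO_INCLUDE_DIRS ${FOO_INCLUDE_DIR})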
This becomes especially painful if the thing you are trying to find has itself been built with CMake. In that case, you already have a bunch of stuff modeled in your build scripts, which you now need to painstakingly reconstruct for the find script, so that it becomes available to downstream projects.
This is where config-file packages shine. Unlike find-modules, the result of running the script is not just a bunch of paths, but it instead creates fully functional CMake targets. To the dependent project it looks like the dependencies have been built as part of that same project.
This allows much more information to be transported in a very convenient way. The obvious downside is that config-file scripts are much more complex than find-scripts. Hence you do not want to write them yourself, but have CMake generate them for you, or rather have the dependency provide a config-file as part of its deployment which you can then simply load with a find_package call. And that is exactly what Qt5 does.
This also means that if your own project is a library, you should consider generating a config file as part of the build process. It's not the most straightforward feature of CMake, but the results are pretty powerful.
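A minimal sketch of that generation for a library target foo (names are placeholders; a more complete setup would also write a version file via CMakePackageConfigHelpers):

# in the library's CMakeLists.txt: install the target and export it as foo::foo
install(TARGETS foo EXPORT fooTargets
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib
        RUNTIME DESTINATION bin
        INCLUDES DESTINATION include)
install(EXPORT fooTargets
        NAMESPACE foo::
        FILE fooConfig.cmake
        DESTINATION lib/cmake/foo)

A downstream project can then call find_package(foo) and link against foo::foo directly.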
Here is a quick comparison of what the two approaches typically look like in CMake code:
Find-module style
find_package(foo)
target_link_libraries(bar ${FOO_LIBRARIES})
target_include_directories(bar PUBLIC ${FOO_INCLUDE_DIR})
# [...] potentially lots of other stuff that has to be set manually
Config-file style
find_package(foo)
target_link_libraries(bar foo)
# magic!
tl;dr: Always prefer config-file packages if the dependency provides them. If not, use a find-script instead.
Actually there is no "magic" in the results of find_package: this command just searches for the appropriate FindXXX.cmake script and executes it.
If the Find script sets the XXX_LIBRARY variable, then the caller can use this variable.
If the Find script creates imported targets, then the caller can use these targets.
If the Find script neither sets the XXX_LIBRARY variable nor creates imported targets ... well, then usage of the script is somewhat different.
The documentation for find_package describes the usual usage of Find scripts. But in any case you need to consult the documentation for the concrete script (this documentation is normally contained in the script itself).
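For example, CMake's own FindZLIB provides both an imported target and the classic variables, so a consumer that wants to cope with either style could do something like this (a sketch; very old CMake versions may lack the target):

find_package(ZLIB REQUIRED)
if(TARGET ZLIB::ZLIB)
    # modern style: the imported target carries include dirs and link flags
    target_link_libraries(bar PRIVATE ZLIB::ZLIB)
else()
    # classic style: wire up the variables manually
    target_include_directories(bar PRIVATE ${ZLIB_INCLUDE_DIRS})
    target_link_libraries(bar PRIVATE ${ZLIB_LIBRARIES})
endif()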
In a project where some targets are to be built and run on the build platform and other targets are to be built for a cross platform, what options do we have when using CMake?
Currently I use CMAKE_BUILD_TYPE to define the tool chain, build type, and platform (for example -D CMAKE_BUILD_TYPE=arm_debug). In one place in the build, I switch tools (compilers, linker, etc.), command-line flags, libraries, etc. according to the value of CMAKE_BUILD_TYPE. For every build type, I create a build directory.
This approach has its drawbacks: multiple build directories and no easy way to make a target from one build type depend on a target in another build type (for example, some kind of precompiler needed on the build platform by the build for the cross platform).
As currently every build target has a single tool chain to be used, I would love to associate a target with a target platform / tool set. This implies that some libraries have to be built for more than one target platform with different tool sets.
The 'one build type and platform per CMake run' limitation is fundamental and I would strongly advise against trying to work around it.
The proper solution here seems to me to split the build into several stages. In particular, for the scenario where a target from one build type depends on a target from another build type, you should not try to have those two targets in the same CMake project. Proper modularization is key here. Effective use of CMake's include command can help to avoid code duplication in the build scripts.
The big drawback of this approach is that the build process becomes more complex, as you now have several interdependent CMake projects that need to be built in a certain order with specific configurations. Although you already seem to be way beyond the point where you can build your whole system with a single command anyway. CMake can help manage this complexity with tools like ExternalProject, which allows you to build a CMake project from within another. Depending on your particular setup, a non-CMake layer written in your favorite scripting language might also be a viable alternative for ensuring that the different subprojects get built in the correct order.
The sad truth is though that complex build setups are hard to manage. CMake does a great job at providing a number of tools for tackling this complexity but it cannot magically make the problem easier. Most of the limitations that CMake imposes on its user are there for a reason, namely that things would be even harder if you tried to work without them.
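As a hedged sketch of such a staged setup (project layout, toolchain file, and variable names are hypothetical), a superbuild could first build the native tooling and then hand it to the cross build via ExternalProject:

# top-level superbuild CMakeLists.txt (sketch)
cmake_minimum_required(VERSION 3.10)
project(superbuild NONE)
include(ExternalProject)

# Stage 1: build the code generator with the host toolchain.
ExternalProject_Add(native_tools
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/tools
    INSTALL_DIR ${CMAKE_BINARY_DIR}/native-install
    CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>)

# Stage 2: build the firmware with the cross toolchain, pointing it at stage 1's output.
ExternalProject_Add(firmware
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/firmware
    INSTALL_COMMAND ""
    CMAKE_ARGS -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/cmake/arm-toolchain.cmake
               -DPRECOMPILER_DIR=${CMAKE_BINARY_DIR}/native-install/bin
    DEPENDS native_tools)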
We are currently attempting to port a very (very) large project built with Ant to Maven (while also moving to SVN). All possibilities are being explored in remodeling the project structure to best fit the Maven paradigm.
Now to be more specific, I have come across classifiers and would like to know how I could use them to my advantage, while refraining from "classifier anti-patterns".
Thanks
from: http://maven.apache.org/pom.html
classifier: You may occasionally find a fifth element on the coordinate, and that is the classifier. We will visit the classifier later, but for now it suffices to know that those kinds of projects are displayed as groupId:artifactId:packaging:classifier:version.
and
The classifier allows to distinguish artifacts that were built from the same POM but differ in their content. It is some optional and arbitrary string that - if present - is appended to the artifact name just after the version number. As a motivation for this element, consider for example a project that offers an artifact targeting JRE 1.5 but at the same time also an artifact that still supports JRE 1.4. The first artifact could be equipped with the classifier jdk15 and the second one with jdk14 such that clients can choose which one to use.

Another common use case for classifiers is the need to attach secondary artifacts to the project's main artifact. If you browse the Maven central repository, you will notice that the classifiers sources and javadoc are used to deploy the project source code and API docs along with the packaged class files.
I think the correct question would be: how to use or abuse attached artifacts in Maven? Because basically that is why classifiers were introduced - to allow you to publish attached artifacts.
Well, Maven projects often implicitly use attached artifacts, e.g. by using maven-javadoc-plugin or maven-source-plugin. maven-javadoc-plugin publishes an attached artifact that contains the generated documentation by using the javadoc classifier, and maven-source-plugin publishes the sources by using the sources classifier.
Now what about explicit usage of attached artifacts? I use attached artifacts to publish harness shell scripts (start.sh and co.). It's also a good idea to publish SQL scripts in an attached artifact with a classifier like sql or something similar.
How can you attach an arbitrary artifact with your own classifier? This can be done with build-helper-maven-plugin.
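For example (file path, type, and classifier are just placeholders), the plugin can attach an extra file to the main artifact during the package phase roughly like this:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-scripts</id>
      <phase>package</phase>
      <goals>
        <goal>attach-artifact</goal>
      </goals>
      <configuration>
        <artifacts>
          <artifact>
            <file>src/main/scripts/start.sh</file>
            <type>sh</type>
            <classifier>scripts</classifier>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>

The attached file is then installed and deployed alongside the main artifact as artifactId-version-scripts.sh.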
... I would like to know how I could use them to my advantage ...
Don't use them. They are optional and arbitrary.
If you are in the middle of porting a project over to maven, keep things simple and only do what is necessary (at first) to get everything working as you'd like. Then, after things are working like you want, you can explore more advanced features of maven to do cool stuff.
This answer is based on your question sounding like a "this feature sounds neat, how can I use it even though I don't have a need for it?" kind of question. If you have a need for this feature, please update your question with more information on how you were thinking of utilizing the classifier feature and we will all be better placed to help you.
In contrast to Jesse Web's answer, it is good to learn about classifiers so that you can leverage them and avoid having to refactor code in addition to porting to Maven. We went through the same process a year or two ago. Previously we had everything in one code base, built together with Ant. In migrating to Maven, we also found the need to break out the various components into their own Maven projects. Some of these projects were really libraries, but had some web resources (jsp, js, images, etc.). The end result was us creating an attached artifact (as mentioned by #Male) with the web resources, using the classifier "web-resources" and the type "war" (to use as an overlay). This was then, and still is after understanding Maven better, the best solution for porting an old, coupled project. We eventually want to separate out these web resources since they don't belong in this library, but at least it can be done as a separate task.
In general, you want to avoid having attached artifacts. This is typically a sign that a separate project should be created to build that artifact. I suggest looking at doing this anytime you are tempted to attach an artifact with a separate classifier.
I use classifiers to define supporting artefacts to the main artefact.
For example I have com.bar|foo-1.0.war and have some associated config called com.bar|foo-1.0-properties.zip
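A consumer can then pull that secondary artifact in by specifying the classifier and type on the dependency, roughly like this (coordinates taken from the example above):

<dependency>
  <groupId>com.bar</groupId>
  <artifactId>foo</artifactId>
  <version>1.0</version>
  <classifier>properties</classifier>
  <type>zip</type>
</dependency>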
You can use classifiers when you have different versions of the same artifact that you want to deploy to your repository.
Here's a use case:
I use them in conjunction with properties in a POM. The POM has default values which can be overridden via the command line. Running without options uses the default property values. If I build a version of the artifact with different property values, I can deploy that to the repo with a classifier.
For example, the command:
mvn -DmyProperty=specialValue package install:install-file -Dfile=target/my-ear.ear -DpomFile=my-ear/pom.xml -Dclassifier=specialVersion
Builds a version of an ear artifact with special properties and deploys the artifact to my repo with a classifier "specialVersion".
So, my repo can have my-ear-1.0.0.ear and my-ear-1.0.0-specialVersion.ear.