Conan recipe dependencies - cmake

I am starting to learn Conan and I have a question. Let's say I have a library that depends on Boost and on CGAL, and CGAL itself also depends on Boost.
If I have a recipe for CGAL, how would I indicate that I want to use the same Boost version that I am using in the library?

You need to check first which Boost version CGAL is using and set that same specific version in your new recipe.
If you use a different version, Conan will warn you about the conflict, but you can force your version using the override argument, so it is possible to use different versions.
Of course, if you want to pin the same version and be sure that it will not change after a while, you can use package revisions too, so that every new build uses exactly the same version.
Why can't Conan detect the version dynamically? Well, it's a chicken-and-egg problem. To resolve the requirements and generate the lockfile, Conan first needs to know what you declare as requirements in your recipe. On the other hand, you would need the resolved graph before declaring your requirements in order to detect the correct Boost version. Thus, you cannot know which Boost version CGAL is using before setting your own requirements and letting Conan generate the dependency graph.
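For illustration, a minimal recipe sketch using an override (Conan 1.x syntax; the CGAL and Boost references below are placeholders, not necessarily real ConanCenter versions):
from conans import ConanFile

class MyLibConan(ConanFile):
    name = "mylib"
    version = "1.0"
    # The tuple form with "override" forces this Boost version for the
    # whole graph, replacing whatever cgal requires upstream.
    requires = (
        "cgal/5.3",
        ("boost/1.77.0", "override"),
    )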

Related

Conan.io use in embedded software development

Please allow me two questions about the use of Conan.io in our environment:
We are developing automotive embedded software. Usually, this includes integration of COTS libraries, most of all for communication and OS layers like AUTOSAR. These are provided in source code. Typical MCUs are Renesas RH850, RL78, or similar devices from NXP, Cypress, Infineon, and so on. We use GNU Make (MinGW), Jenkins for CI, and have our own Eclipse CDT distribution as a standardized IDE.
My first question:
Those 3rd-party components are usually full of conditional compilation to allow proper compile-time configuration. With this approach, the code and thus the resulting binaries are optimized, both in size and in run-time behavior.
Besides those components, we of course have internal reusable components for different purposes. The compile-time configuration here is not as heavy as in the above example, but it is still present.
In one sentence: we have a lot of compile-time configuration - what could be a good approach to setting up a JFrog/Conan based environment? Stay with the sources in every project?
XRef with Conan:
Is there a way to maintain cross-reference information coming from Conan? I am looking for something like "Project xxx is using Library lll Version vvv". That way, we would be able to automatically identify other "users" of a library in case a problem is detected.
Thanks a lot,
Stefan
Conan recipes are based on Python and are thus very flexible, being able to implement any conditional logic that you might need.
As an example, the libxslt recipe in ConanCenter contains something like:
def build(self):
    self._patch_sources()
    if self._is_msvc:
        self._build_windows()
    else:
        self._build_with_configure()
And following this example, the autotools build contains code like:
def _build_with_configure(self):
    env_build = AutoToolsBuildEnvironment(self, win_bash=tools.os_info.is_windows)
    full_install_subfolder = tools.unix_path(self.package_folder)
    # fix rpath
    if self.settings.os == "Macos":
        tools.replace_in_file(os.path.join(self._full_source_subfolder, "configure"),
                              r"-install_name \$rpath/", "-install_name ")
    configure_args = ['--with-python=no', '--prefix=%s' % full_install_subfolder]
    if self.options.shared:
        configure_args.extend(['--enable-shared', '--disable-static'])
    else:
        configure_args.extend(['--enable-static', '--disable-shared'])
So Conan recipes can implement any compile-time configuration. That doesn't mean you always need to build from sources. The parametrization of a build is basically:
Settings: for "project-wide" configuration, like the OS or the architecture. Settings typically have the same value across all dependencies.
Options: for package-specific configuration, like a library being static or shared. Every package can have its own value, different from other packages.
You can implement the variability model for a package with settings and options, then prebuild the most commonly used binaries. When a variant with no precompiled binary is requested, Conan will raise an error saying so, and users can specify --build=missing to build it from sources. A sketch of what such a recipe could look like is shown below.
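As a rough illustration of how a compile-time switch could be modeled with options (Conan 1.x syntax; the package name "mycomponent" and the "max_channels" option are invented for this example):
from conans import ConanFile, CMake

class MyComponentConan(ConanFile):
    name = "mycomponent"
    version = "0.1"
    settings = "os", "arch", "compiler", "build_type"
    # Each compile-time switch becomes an option; every combination of
    # settings and options yields a distinct binary package ID.
    options = {"shared": [True, False], "max_channels": [8, 16, 32]}
    default_options = {"shared": False, "max_channels": 16}

    def build(self):
        cmake = CMake(self)
        # Forward the option to the build as a CMake cache definition.
        cmake.definitions["MAX_CHANNELS"] = str(self.options.max_channels)
        cmake.configure()
        cmake.build()

A consumer could then request a specific variant with conan install mycomponent/0.1@ -o mycomponent:max_channels=32 --build=missing.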

Produce static libs from tensorflow_cc and tensorflow_framework

As far as I understand, using Bazel I can only produce libtensorflow_cc.so and libtensorflow_framework.so.
I need to produce static libs that are position-independent (-fPIC) because I'll link them into a dynamic lib of my own later.
I found this answer, which suggests using a Makefile included in the project.
I successfully used it to replace libtensorflow_cc.so, but what can I do to replace libtensorflow_framework.so?
Not an actual answer, but too long for a comment.
I managed to do something like what you mention using Bazel on Windows. In particular, I wanted to make a single wrapper DLL with one or two headers (limited in functionality) that I could move around easily. I'll write a summary of the things that I did; it's rather convoluted and customized for our needs, but maybe you will find something useful.
I pass --config=monolithic to the bazel build command (besides any other options that you need). That avoids modularizing the library and thus removes the dependency on a libtensorflow_framework.so (see tools/bazel.rc).
The goal that I build is not any of the ones in the TensorFlow repository. Instead, I add a very small program that uses my wrapper as a new Bazel target (a C++ file plus my headers and a BUILD file). So all of TensorFlow has to be compiled beforehand in order to compile this final dummy program.
Once that is done, I take advantage of the fact that Bazel already compiles every subgoal as a static library. I check a file under the bazel-bin directory generated for my dummy program goal, with a name ending in .params - there I find the paths of all the static libraries that were used to compile it.
I copy all of these intermediate static libraries somewhere else. Also, I copy a bunch of headers I will need to compile my final wrapper (TensorFlow's own, but also Eigen, Protobuf and Nsync now too). I put all of this in a build area I have prepared before.
I use an NMake Makefile to produce my custom DLL, using the static libraries, the copied headers and my own thin wrapper.
And that's about it, I think. I have an ugly Bash script I run on MSYS2 that does everything for me. Usually with every new release I need to tweak one or two things (some option in the configure script, some additional headers I need to copy, etc.), but I do get it to work in the end. It's quite a lot of fiddling though, so I'm not necessarily saying you should use the same approach (but feel free to ask for details about any step if you want).
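The library-collection step could be scripted; here is a rough, untested Python sketch, under the assumption that the .params file simply lists one linker input per line (the target path "bazel-bin/wrapper/dummy-2.params" and the destination are hypothetical):
import pathlib
import shutil

# The .params file Bazel writes next to the dummy target's output.
params = pathlib.Path("bazel-bin/wrapper/dummy-2.params")
dest = pathlib.Path("build_area/libs")
dest.mkdir(parents=True, exist_ok=True)

for line in params.read_text().splitlines():
    line = line.strip()
    # Keep only entries that look like static libraries.
    if line.endswith((".a", ".lib")):
        src = pathlib.Path(line)
        if src.exists():
            shutil.copy(src, dest / src.name)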
Using the -2.params files @jdehesa mentioned and Bazel's verbose output (the -s switch), you can even create a link command to eventually statically link these intermediate static libraries. I automated this process for Windows/Linux/macOS and contributed it to the vcpkg package manager. To use it, just run vcpkg install tensorflow:x64-windows-static. If you're interested in the sources, you'll find them here.

How to get cmake to find size of type in third-party header mpi.h?

I am working on a free-software project which involves high performance computing and the MPI library.
In my code, I need to know the size of the MPI_Offset type, which is defined in mpi.h.
Normally such projects would be built using autotools, and this problem would be easily solved. But for my sins, I am working with a CMake build, and I can't find any way to perform this simple task. There must be a way to do it - it is commonly done in autotools projects, so I assume it is also possible in CMake.
When I use:
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
It fails, because mpi.h is not included in the generated C code.
Is there a way to tell check_type_size() to include mpi.h?
This is done via CMAKE_EXTRA_INCLUDE_FILES:
include(CheckTypeSize)
find_package(MPI)
include_directories(SYSTEM ${MPI_INCLUDE_PATH})
# Make check_type_size() include mpi.h in its generated test program.
set(CMAKE_EXTRA_INCLUDE_FILES "mpi.h")
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
# Reset so later checks are not affected.
set(CMAKE_EXTRA_INCLUDE_FILES)
It may be more common to write platform checks with autotools, so here is some more information on how to write platform checks with CMake.
On a personal note: while CMake is certainly not the most pleasant exercise, for me autotools is reserved for the capital sins. It is really hard for me to defend CMake, but in this instance the behavior is even documented. Naturally, setting a separate "variable" that you even have to reset after the fact, instead of just passing it as a parameter, clearly conforms to the surprising "design principles" of CMake.

Examples of Semantic Version Names

I have been reading about semver. I really like the general idea. However, when it comes to putting it into practice, I feel like I'm missing some key pieces of information. I'm not sure where the name of a library fits in, or what to do with file variants. For instance, should the file name be something like [framework]-[semver].min.js? Are there popular JavaScript frameworks that use semver? I don't know of any.
Thank you!
Let me try to explain.
If you are not developing a library that you would like to maintain for years to come, don't bother with semver. If you do want to version every release, read on.
Suppose you are an architect or developer building a library that is meant to be used by hundreds of developers over time, in a distributed manner. You really need to be careful about what you are doing and what your developers are adding (interesting features that tempt you to push those changes into the currently distributed file). How do you tell your library's users when to upgrade, and in what scenarios? Historically, people followed various ad-hoc versioning schemes, and each worked fine in its author's mind.
Then why do you need semver?
The idea is that for a group of people to follow anything collectively, there should be a concrete specification, even if each of them already knows it in their mind. With that thought, a specification was made: its authors collected the best practices around versioning software and published them on a single website, semver.org. Its main principles are:
Imagine you have already released your library as version "lib.1.0.98"; now follow these rules for subsequent development.
Say your library is bundled and named xyz, and,
given a version number MAJOR.MINOR.PATCH (like xyz.MAJOR.MINOR.PATCH), increment the:
1. MAJOR version when you make incompatible API changes
(existing code of your library's users breaks if they adopt the new version without changing their own programs),
2. MINOR version when you add functionality in a backwards-compatible manner
(existing code keeps working, plus some improvements in performance or features), and
3. PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
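To make the compatibility rule concrete, here is a small illustrative Python sketch (my own example, not part of the SemVer specification):
import re

SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?$")

def parse(version):
    m = SEMVER.match(version)
    if not m:
        raise ValueError("not a semantic version: %r" % version)
    return int(m.group(1)), int(m.group(2)), int(m.group(3))

def safe_upgrade(installed, candidate):
    # Same MAJOR means the API contract is unchanged (for MAJOR >= 1).
    return parse(candidate)[0] == parse(installed)[0] and parse(installed)[0] >= 1

print(safe_upgrade("1.0.98", "1.2.0"))  # True: minor/patch bumps are compatible
print(safe_upgrade("1.0.98", "2.0.0"))  # False: a major bump may break the API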
If you are not developing a library to such a standard, or are not in a position to do so, you need not worry about semver at all.
Finally, the well-known d3 library follows this practice.
Semantic Versioning only defines how to name your versions. It does not specify what you will do with the version number afterwards. You can put version numbers in package names, store them in a properties file inside your application, or just publish them in a wiki. All those options are open to discussion and not part of the problem space addressed by SemVer.
semver is used by npm and bower (and perhaps some other tools) for dependency management. Using semver, it is possible to decide which versions of which packages to use when multiple libraries depend on the same library; see the sketch of range matching below.
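For instance, npm-style range specifiers build on semver. Here is a hypothetical, simplified Python illustration of how a caret range such as ^1.2.3 could be matched (real npm semantics have more edge cases, e.g. for 0.x versions):
def satisfies_caret(range_spec, version):
    # "^1.2.3" accepts any version with the same MAJOR that is >= 1.2.3.
    base = tuple(int(x) for x in range_spec.lstrip("^").split("."))
    cand = tuple(int(x) for x in version.split("."))
    return cand[0] == base[0] and cand >= base

print(satisfies_caret("^1.2.3", "1.4.0"))  # True
print(satisfies_caret("^1.2.3", "2.0.0"))  # False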
As others have said, semantic versioning is a standard versioning scheme that tells your users which versions of your library should be compatible with each other, and which ones are not.
The idea is to give your users more confidence that it's safe to upgrade to a newer patch or minor version, because it's tried, tested, and true to being backwards compatible with the previous version. That is, at least, what you're telling your users.
As far as tooling goes, I don't do much in JavaScript, but I typically let my build server handle stamping my assemblies, etc., with the correct version. I have a static major number I bump whenever I make breaking changes, a static minor number I bump every time I add new features, and an auto-incrementing patch number for whenever I check in bug fixes.
Especially if this is a JavaScript library you plan to share on a public repository of some kind (nuget, gem, etc.), you probably want some form of automated packaging system, and you put the logic for specifying your version number in there (in the package metadata and in the name of the JavaScript file, which is the standard I've typically seen).
Take a look at sbt which is the Scala Build Tool. In it, we write dependencies like this:
val scalatest = "org.scalatest" %% "core" % "2.1.7" % "test"
val jodatime = "org.joda" % "jodatime" % "1.4.5"
Wherein the operator %% means "append the version of Scala that you're building against." Packaging things in this language generally creates JAR files with names like <my project>_<scala version>_<library version>.jar, which is quite handy for semantically naming things automagically. The % operator can be read as "don't version this part."
That said, this resulted from the fact that the same library compiled against different Scala versions was not binary compatible. So it was more a result of binary incompatibilities than a conscious design choice.

Ada GPS IDE can't seem to find GtkAda

I have installed both the GNAT Programming Studio (GPS) and GtkAda. They both seem to work fine, but when I try to build the Simple Window project under New Project from Template, I get a bunch of errors saying "file gtk.ads not found." This seems to be a directory/dependency sort of problem - GPS doesn't know where to look for GtkAda. I'm running Windows 7, and have GPS installed at C:\GNAT\2011, and GtkAda installed at C:\GtkAda. I tried adding GtkAda to my PATH; at the moment my PATH user variable includes C:\GNAT\2011\bin, and my Path System variable includes C:\GtkAda\bin. Any advice on resolving this problem is greatly appreciated!
There are two things here.
First, "project" is key. Whenever you're building something that depends
on a library like GtkAda, it's much much easier if (a) you use GNAT
Project to manage it, and (b) you use the GPR(s) provided by the library
- always assuming it does, of course.
In the case of GtkAda, that means that your GPR needs to "with" GtkAda:
with "gtkada";
project Tinkering is
...
Second, gnatmake or gprbuild needs to be able to find gtkada.gpr.
The easiest way is to install GtkAda in such a way that gtkada.gpr ends up in the default place where gnatmake/gprbuild expects to find GPR files, which is $prefix/lib/gnat. GtkAda obeys this convention, so you could install GtkAda under the same root as your compiler. I don't know why that isn't the recommended approach anyway.
If you don't want to do that, you can add the correct location to the environment variable ADA_PROJECT_PATH; in your case, set it to C:\GtkAda\lib\gnat.
There is a lot of good stuff in the GtkAda README at libre.adacore.com, and in the GtkAda User's Guide, which (per the README) is also included with the installed package at (in your case) C:\GtkAda\doc\GtkAda\gtkada_ug.