Following modern CMake guidelines (e.g. see https://www.slideshare.net/DanielPfeifer1/effective-cmake, particularly slide 46), I am trying to write out a PkgConfig.cmake file for my Pkg.
Pkg depends on Foo, which in turn depends on Bar. Neither Foo nor Bar has a config file; instead I am using FindFoo.cmake and FindBar.cmake to find them.
My PkgConfig.cmake file looks like this:
set(Pkg_LIBRARIES Pkg::Pkg)
include(CMakeFindDependencyMacro)
find_dependency(Foo) # Use FindFoo.cmake to find and import as target Foo::Foo
# Foo depends on Bar which is similarly imported using
# FindBar.cmake as target Bar::Bar
include("${CMAKE_CURRENT_LIST_DIR}/PkgTargets.cmake")
My resulting PkgTargets.cmake looks like:
add_library(Pkg::Pkg STATIC IMPORTED)
set_target_properties(Pkg::Pkg PROPERTIES
INTERFACE_LINK_LIBRARIES "Foo::Foo")
# Load information for each installed configuration
...
My question is how can I avoid other packages importing Pkg into their project from having to specify where Foo and more importantly where Bar is to be found?
Doesn't it defeat the purpose of building transitive dependencies if the locations of Foo and Bar packages have to be specified again either through variables Foo_ROOT and Bar_ROOT or CMAKE_PREFIX_PATH?
My Pkg already knows where it was found, so should I parse/set Foo_ROOT and Bar_ROOT and put it into my PkgConfig.cmake file?
My question is how can I avoid other packages importing Pkg into their project from having to specify where Foo and more importantly where Bar is to be found?
It is perfectly allowed for PkgConfig.cmake to specify (hint) locations of its dependencies.
My Pkg already knows where it was found, so should I parse/set Foo_ROOT and Bar_ROOT and put it into my PkgConfig.cmake file?
Note that XXXConfig.cmake files are generally written so that the installed project can be moved to another directory on the build machine or, more importantly, copied to another machine and used there.
Because the locations of Foo and Bar on the other machine may differ from those on the build machine, knowing their build-machine locations cannot help find them on the target machine.
Nevertheless, it is up to you (as the project's developer) to specify the project's usage constraints. You may, e.g., specify that the project can only be used on the machine where it was built. In that case, reusing the build locations of Foo and Bar in the PkgConfig.cmake script is justified.
Moreover, even when you allow the installed project to be copied to another machine, you can still use the build locations of Foo and Bar as hints for searching for them in PkgConfig.cmake. That way, if the project is used on the same machine where it was built, the dependencies will be found without user intervention. The same is true if the project is copied to another machine that has Foo and Bar in the same locations as the build machine.
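For example, a minimal sketch of that hint approach. It assumes PkgConfig.cmake is generated from a template with configure_file(... @ONLY), so that @Foo_ROOT@ is substituted with Foo's build-time location; Foo_ROOT itself is the standard <PackageName>_ROOT variable honoured by find_package since CMake 3.12, and Bar would be handled the same way.
# PkgConfig.cmake.in -- sketch; @Foo_ROOT@ is filled in when this file is
# generated via configure_file(... @ONLY).
include(CMakeFindDependencyMacro)
# Use the build-time location only as a fallback hint, so a consumer who
# sets Foo_ROOT explicitly still takes precedence.
if(NOT DEFINED Foo_ROOT)
    set(Foo_ROOT "@Foo_ROOT@")
endif()
find_dependency(Foo)
include("${CMAKE_CURRENT_LIST_DIR}/PkgTargets.cmake")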
Related
I am trying to integrate the Refinitiv Real-Time SDK into my own application.
I have downloaded the source code and built the libraries.
Typically you would then expect there to be an INSTALL target, which would install the libraries and headers into some location, and then, if we're lucky, a find_package module which we can later use to import the library targets into our own project.
Unfortunately, neither of these are provided.
How then, to import the libraries and their header files into my project?
ExternalProject_Add
I do not want to use the standard ExternalProject_Add to download and build the source code every time I reconfigure my project. (In particular because our CI server will have to do this for every single build.) Rather I want to build it once (and make it part of the CI server's docker image), and then link against the libraries / include the header files directly from where I've copied the source.
add_library INTERFACE
I know that I can create a new INTERFACE library target:
find_library(LIB_EMA ema PATHS "${REFINITIV_BINARY_DIR}")
find_library(LIB_ETA eta PATHS "${REFINITIV_BINARY_DIR}")
# etc... for all the Refinitiv libraries
add_library(refinitiv INTERFACE)
target_link_libraries(refinitiv INTERFACE
    ${LIB_EMA}
    ${LIB_ETA}
    # etc...
)
target_include_directories(refinitiv INTERFACE
    ${REFINITIV_SOURCE_DIR}/Ema/Include
    ${REFINITIV_SOURCE_DIR}/Eta/Include
    # etc...
)
This is, however, tedious and prone to breaking whenever Refinitiv releases a new SDK version and decides to change a path or a link dependency, etc.
Question:
What I would like to do is use their CMakeLists.txt file, but only to access the already-built targets, not to build them as part of my build.
Is this possible?
I have a small CMake project with different build types, debug and release. I'm also providing a Debian package for this project. Building the Debian package for release and providing it in my own Debian repository works perfectly.
Now I also want to provide another Debian package for debug, for debugging purposes, with a different package name. For example, my project is called myproject, and the debugging package should be myproject-debug.
I have already read documentation about how to solve this in the debian/control file. I want to use Replaces: ... on each package, vice versa, so that only one of the two packages can be installed at a time: either myproject or myproject-debug, but not both. Both packages use exactly the same files and filenames; the only difference is that the binary in myproject-debug has more debugging information and debug prints. Everything else should be the same: same filename, same paths, etc.
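For illustration, the control stanzas I have in mind look roughly like this (a sketch; as far as I understand, Conflicts: is needed alongside Replaces: to actually prevent both packages being installed at once):
Package: myproject
Conflicts: myproject-debug
Replaces: myproject-debug

Package: myproject-debug
Conflicts: myproject
Replaces: myproject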
Now the problem is that I don't know what the debian/rules file should look like in order to first build the myproject package in one folder and then build myproject-debug with different CMake options (-DCMAKE_BUILD_TYPE=debug) in another folder, so that the filenames can stay the same.
There is this CMake tutorial in the Debian documentation, but it doesn't fit my requirements, because there everything is built in a single folder, which then contains the different files. Different .install files are used to copy the needed files into each package. But since both of my packages, myproject and myproject-debug, ship a binary with the same filename, this tutorial does not really fit my needs.
I already have the following lines in my debian/rules file:
override_dh_auto_configure:
	dh_auto_configure -- -DCMAKE_BUILD_TYPE=release
But how can I run two different builds with two different build types?
For example, something like this, to split it up:
override_dh_auto_configure_release:
	dh_auto_configure -- -DCMAKE_BUILD_TYPE=release

override_dh_auto_configure_debug:
	dh_auto_configure -- -DCMAKE_BUILD_TYPE=debug
And run both in different folders so I can add both folders to two different packages.
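From the debhelper man pages it looks like dh_auto_configure, dh_auto_build and dh_auto_install all accept a --builddirectory option, so perhaps something like this untested sketch (the build directory names are arbitrary):
#!/usr/bin/make -f
# Untested sketch: run each build type in its own directory and install
# each one into its own package staging area.
%:
	dh $@ --buildsystem=cmake

override_dh_auto_configure:
	dh_auto_configure --builddirectory=build-release -- -DCMAKE_BUILD_TYPE=release
	dh_auto_configure --builddirectory=build-debug -- -DCMAKE_BUILD_TYPE=debug

override_dh_auto_build:
	dh_auto_build --builddirectory=build-release
	dh_auto_build --builddirectory=build-debug

override_dh_auto_install:
	dh_auto_install --builddirectory=build-release --destdir=debian/myproject
	dh_auto_install --builddirectory=build-debug --destdir=debian/myproject-debug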
Or maybe there is even a better solution I cannot imagine yet?
I have a CMake project to build multiple shared libraries and tools, most of these under subdirectories:
add_subdirectory(libFirst)
add_subdirectory(libSecond)
add_subdirectory(myTool)
# etc...
The install(TARGET "someTarget" COMPONENT "someTarget" ...) rules are in the respective subdirectory/CMakeLists.txt files.
I would like to generate Debian packages for all of these using a make package command from the build directory. I have CPACK_DEB_COMPONENT_INSTALL set to ON.
The problem I'm facing is that not all of the targets have the same VERSION and/or SOVERSION. For example, libFirst is at version 1.0.0.0 and libSecond is at version 4.3.0.0. This means the generated packages should also have different versions, but the only way I've found to specify a version is through the CPACK_PACKAGE_VERSION_MAJOR, CPACK_PACKAGE_VERSION_MINOR and CPACK_PACKAGE_VERSION_PATCH variables (and perhaps the internal CPACK_PACKAGE_VERSION variable), which set the version for all generated packages.
Is there a way to set package versions per-component, for example by setting some variables similarly to the other CPACK_COMPONENT_<COMPONENT>_* or CPACK_DEBIAN_<COMPONENT>_* variables?
I don't think this is possible at the moment, but I have created a merge request (https://gitlab.kitware.com/cmake/cmake/merge_requests/2305) which provides this functionality.
I hope it will get approved but in the meantime you can locally change your CPackDeb.cmake as shown in this diff: https://gitlab.kitware.com/cmake/cmake/merge_requests/2305/diffs. The default location of that file is in /usr/share/cmake/Modules/CPackDeb.cmake.
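If it is merged, I'd expect usage to follow the existing per-component naming convention; a hypothetical sketch (the exact variable name is an assumption, so check the merge request for the final spelling):
# Hypothetical: assumes a CPACK_DEBIAN_<COMPONENT>_PACKAGE_VERSION variable
# in the style of the existing CPACK_DEBIAN_<COMPONENT>_* variables, with
# the component name uppercased as usual.
set(CPACK_DEB_COMPONENT_INSTALL ON)
set(CPACK_DEBIAN_LIBFIRST_PACKAGE_VERSION "1.0.0.0")
set(CPACK_DEBIAN_LIBSECOND_PACKAGE_VERSION "4.3.0.0")
include(CPack)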
I have a directory containing several tools which I use for independent projects, e.g.:
CommonTools
+ Tool A
+ Tool B
+ Tool C
Tool B depends on Tool A, but Tool A can be used independently from Tool B. I think I have two options:
I can install the tools under a system directory (e.g. on Windows, C:\Program Files). This is not necessarily a good thing, given that some of my programs are meant to be used from the same directory they are shipped in (because I don't have sufficient rights to write to a system directory). Besides, I still need to locate the header files to compile projects that use those tools.
I could use find_library to locate them. Then I run into the following problem: find_library(A) won't work until I've actually built A, so I can't run cmake on CommonTools (because Tool B requires Tool A). I could call cmake from make, but that looks rather convoluted...
I can put relative paths to Tool A in Tool B & only use find_library for other projects. Unfortunately, this relative path changes depending on whether I'm building CommonTools or Tool B.
What are your thoughts on this? Thanks!
As I wanted to be able to perform one-step builds, this is what I ended up doing.
I distinguish the submodules of the module I'm currently building from external dependencies & third-party tools. Each (sub)module is only responsible for building itself. This means that all external dependencies & third-party tools must be already installed or available in binary + header form from a server. As a corollary, it means that a missing dependency is a binary which should be available from a given server but isn't.
Submodules are added using add_subdirectory, which means that if any of them is not available, the configuration step will fail with an explicit message.
External dependencies & third-party tools are located using find_package. The HINT location is an option which must be provided by the user performing the build (this gives the user an indication of the module's dependencies). If any of them is not found, a binary is downloaded from a given location using ExternalProject_Add. The <module>_FOUND, <module>_LIBRARIES & <module>_INCLUDE_DIRS variables must be set manually in the CMakeLists.txt file, but given a proper directory layout on the server side (e.g. <module>-<version>-<platform>/include & <module>-<version>-<platform>/binaries), this can be done in a consistent way (e.g. using a macro). There again, if no binaries are found on the server, the configuration step will fail with an explicit message.
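A minimal sketch of such a macro, assuming a hypothetical DEPS_SERVER URL and the archive layout above (the real server and paths would differ):
# Sketch only: DEPS_SERVER, the archive naming and the library layout are
# placeholders. Note the download runs at build time, so the variables set
# below point at where the files will be once that step has executed.
include(ExternalProject)
macro(require_binary_dependency module version platform)
    find_package(${module} QUIET CONFIG HINTS "${${module}_HINT}")
    if(NOT ${module}_FOUND)
        ExternalProject_Add(${module}_binaries
            URL "${DEPS_SERVER}/${module}-${version}-${platform}.zip"
            CONFIGURE_COMMAND ""
            BUILD_COMMAND ""
            INSTALL_COMMAND "")
        ExternalProject_Get_Property(${module}_binaries SOURCE_DIR)
        set(${module}_INCLUDE_DIRS "${SOURCE_DIR}/include")
        set(${module}_LIBRARIES "${SOURCE_DIR}/binaries/${module}")
        set(${module}_FOUND TRUE)
    endif()
endmacro()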
All of this means that the continuous integration server will correctly detect any missing dependencies (i.e. components which should be on the server but aren't or submodules which are not under version control) at configuration time rather than at build time, while still allowing one-step builds.
I hope this can be of some use to others.
PS: a side-note for Google Test users: "gtest must be recompiled for each module because every user needs to compile his tests using the same compiler flags used to compile the installed Google Test libraries; otherwise he may run into undefined behaviors. If you compile Google Test and your test code using different compiler flags, they may see different definitions of the same class/function/variable." This means (in my case) that you actually need to run an ExternalProject_Add command in every module, because each module contains its own tests.
My company makes extensive use of Ivy to download dependencies. Some of these dependencies are huge (~500 MB) and take a while to download from the remote repositories.
To build our application we have an Ant script that first resolves all the dependencies and then deploys to the server.
I have set an "IVY_HOME" environment variable so that all the dependencies are downloaded to D:\ivy_home instead of C:\Users\.ivy2\ - this is because D: is my SSD which is significantly faster, and it is where my local server directories are located - so copying files from ivy_home to the server is super fast.
But for some reason, when I am using the IvyDE plugin inside Eclipse, it always wants to download a separate copy of all the dependencies and puts them on C:\, which causes several issues:
Local publishes from the Ant script will not be picked up in Eclipse, since they are placed in a different location
Dependencies already downloaded to D: will not get picked up, which makes the Ivy resolve inside Eclipse much slower than it needs to be
The dependencies sit on the slower drive in Eclipse, so performing searches on and executing these jars is also slower
How about creating a symlink that redirects .ivy2 in Users to D:? I've tried it myself and it seems to work fine.
Open cmd as administrator, then execute this line:
mklink /d C:\Users\{username}\.ivy2 D:\.ivy2
I'd create an ivysettings.xml file and specify the location of my cache using the caches directive. See the following answer for an example:
can I turn off the .ivy cache all together?
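For instance, a minimal ivysettings.xml along these lines (the cache path reuses the D:\ivy_home location from the question):
<!-- Minimal sketch: point Ivy's resolution cache at the fast drive. -->
<ivysettings>
    <caches defaultCacheDir="D:/ivy_home/cache"/>
</ivysettings>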
Why don't you set up Ivy globally with an ivysettings.xml along with a property file?
This property file could have this:
ivy.default.ivy.user.dir=D:\ivy_home
For individual projects you could uncheck "Enable project specific settings" in each IvyDE library's configuration, so they would use the Ivy global settings, with just one extra Eclipse environment configuration.