A quote from the official documentation:
"Specify rules to run at install time."
What exactly is install time?
The problem for me: I am on Linux, where software is installed from packages that contain just dependencies and data; CMake plays no part at that point. So the installation time of software seems to be out of scope for CMake. What exactly do they mean, then?
Building a CMake project can roughly be divided into three phases:
Configure time. This includes everything that happens while running cmake itself. This phase is concerned with inspecting certain properties of the host system and generating the specific build files for that platform under the selected configuration.
Build time. This includes everything that happens while actually building your project from the files generated by CMake (like, when running cmake --build or make). This is where all of the actual compilation and linking happens, so at the end of the build phase, you have a usable binary.
Install time. This includes everything that happens when running the INSTALL target generated by CMake (that is, when running cmake --build . --target install or make install). This takes care of copying the binaries that were generated into the build tree to a different directory. Note that the build tree contains a lot of stuff that is no longer needed once the build is completed if you are only interested in running the binary: intermediate build artifacts such as the build files generated during the configure phase and the object files created during the build phase. Furthermore, the install phase may include additional steps that change the binaries to make them more portable. For instance, on Linux systems you might want to remove the build directory from the shared library search path embedded in the binary and replace it with a portable equivalent. So the install phase can do more than just copy the important files to a new directory.
Note that the last phase is optional. If you do not want to support calling make install but prefer another deployment mechanism, you simply don't use the install command in your CMake script and no INSTALL target will be generated.
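As a minimal sketch (project and target names are made up for illustration), the three phases map onto a project like this:

cmake_minimum_required(VERSION 3.16)
project(hello CXX)

# Build time: compile main.cpp into the hello executable.
add_executable(hello main.cpp)

# Install time: copy the built executable to <prefix>/bin.
# This command is what causes the INSTALL target to be generated.
install(TARGETS hello DESTINATION bin)

Here cmake -B build is configure time, cmake --build build is build time, and cmake --install build (or make install) is install time.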
I'd like to expand a little on the answer that ComicSansMS gave you.
As he mentioned, CMake generates an extra target called install for the make tool (when you use a Makefile-based generator).
This may look odd to you since on Linux software is normally installed through the package system. However, the install target is still useful, or even necessary:
When you develop your application you may need to install it (move the binaries and possibly some include files) to a certain location so that your projects can see each other. For example, you may develop a library and a set of unrelated applications that use it. This library must then be installed somewhere to be visible. That doesn't mean you need to put it into the /usr directory; you can use a prefix inside your /home, as shown below.
The process of Linux package preparation requires an install step. For example, the RPM packaging system performs three main steps when an rpm package file is built: the project is configured, then compiled and linked, and finally installed to a staging location. All files from this location are then packed into the rpm file.
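For the first point, installing into your home directory is just a matter of choosing the prefix (the path here is only an example):

cmake -DCMAKE_INSTALL_PREFIX=$HOME/.local -B build
cmake --build build --target install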
Related
I'm missing something in my understanding of CMake+vcpkg, and I'm also missing proper keywords to search for a solution. (Plus I'm new to both CMake and vcpkg, unfortunately.)
I want to have a public repo for a C++ project that uses CMake as its build system and vcpkg as its package manager.
At my current level of understanding, the user needs to have CMake and vcpkg already installed before he can type cmake and build the repo. I'd like to make it as simple as possible to build the repo, without a bunch of instructions telling him how to get set up before he can even build.
Is this right?
I'd like a one-step solution: After cloning the repo user types ... something ... and the repo gets built.
I am willing in this day-and-age to assume he's got CMake installed ... plus that it can find the right toolchain. So maybe all he needs to type is 'cmake' ...
Is it a reasonable assumption that the user has CMake installed and configured with his preferred toolchain?
I am not willing to assume he's got vcpkg installed.
Is it a reasonable assumption that the user does not have vcpkg installed and configured?
(TBH, I don't even know if it is CMake or vcpkg that configures the toolchain - I assumed CMake but one of the suggested questions suggests it is vcpkg ...)
What are the reasonable assumptions today, and what is the minimal-step solution?
There's nothing wrong with assuming that the user has certain tools installed.
Let's say you are developing libfoo which depends on libbar and you want to make it as easy as possible for your users to install libfoo.
With a package manager
If libfoo and libbar are available via the same package manager all your users have to do is:
vcpkg install libbar libfoo
You don't have to do anything special in libfoo for this; just list all the dependencies in your readme and instruct the user to install them.
It doesn't really matter what package manager is used.
Without a package manager
You will still want to make it easy for people to build and install your project directly. It may seem user friendly to invoke a package manager during the configuration or build phase of your project to resolve all dependencies, because the user no longer has to deal with installing those, but it isn't, for a number of reasons, including:
you or someone else may want to add your project to another package manager (like conan, spack, etc)
someone may want to consume libfoo with FetchContent, CPM, directly with add_subdirectory, etc
someone may not be a user of vcpkg - there's no need to force them to use it, if possible
you may want to add another dependency, libbaz, which is not available on vcpkg
a user may have the right version of libbar already installed (not necessarily through vcpkg)
This list is not exhaustive. If you're not writing a library some points don't really apply.
This means that someone who has all the dependencies already installed should be able to use libfoo like this:
git clone your-repo
cd your-repo
cmake -Bbuild
cmake --build build
cmake --install build
Resolving dependencies without a package manager
However, it may be desirable to resolve dependencies automatically. If your dependencies are built with CMake, the easiest way of doing this is with FetchContent. For some of the reasons outlined above, you should provide an escape hatch so people can still use already installed dependencies. This can be done with an option, for example something like FOO_USE_EXTERNAL_BAR. It can default to either ON or OFF; there's no single right answer, and as long as the user can control it, I don't think it matters much. You should namespace your options to avoid conflicts with options used by other projects.
In this case your build script could do this:
option(FOO_USE_EXTERNAL_BAR "Use an already installed libbar" ON)

if (FOO_USE_EXTERNAL_BAR)
    # Use a copy of libbar that is already installed on the system.
    find_package(bar REQUIRED)
else ()
    # Download libbar and build it as part of this project.
    include(FetchContent)
    FetchContent_Declare(
        bar
        GIT_REPOSITORY bar-repo
        GIT_TAG release-tag
    )
    FetchContent_MakeAvailable(bar)
endif ()

target_link_libraries(foo PRIVATE bar::bar)
Depending on how libbar's CMakeLists.txt is written and organized, the if and else branches may get more complicated. See Effective CMake for some details and tips.
Now I can either let libfoo resolve libbar by setting FOO_USE_EXTERNAL_BAR to OFF when I configure your project, or I can set it to ON to keep more control over how it is resolved. I may even use libfoo as a dependency in a project that already depends on libbar. If you always pull libbar in yourself, I can't avoid conflicts in that case.
Using CMake to update dependencies
You may still find it convenient to be able to update all the project's dependencies through CMake without downloading them via FetchContent. While this will probably raise some eyebrows, you could add a custom target that resolves dependencies with a package manager. This should also be controllable by an option. Unlike in the case above, I strongly believe that this option should be off by default:
if (FOO_AUTO_USE_VCPKG)
    # Custom target that installs the dependencies via vcpkg when built.
    add_custom_target(
        update_deps
        COMMAND vcpkg install libbar
    )
    # Run update_deps before every build of foo.
    add_dependencies(foo update_deps)
endif ()
This will invoke vcpkg every time you build foo so it will make your builds slower. If you remove the add_dependencies call you would have to manually run the update_deps target whenever you need to (which shouldn't be that often anyway).
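Without the add_dependencies call, updating dependencies becomes an explicit step; for example:

cmake --build build --target update_deps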
Notes
Options are a great way of giving your users control. It should be noted that they increase the cognitive load, so picking strong defaults helps with that.
FetchContent is a nice way of taking this burden away from the user, but at the same time multiple projects that use it will end up re-downloading the same libraries over and over again. It is still more user friendly than invoking a package manager at build time, and as long as users can disable this behavior there's nothing to worry about.
Some parts of this answer may be regarded more as opinion and less as facts. As I said, there is no one right way of doing this, different people will have different ways of solving this problem. Different projects and different environments will have different constraints.
I already recommended the Effective CMake talk above; other useful resources are available here. If you're a library author you may also want to take a look at Deep CMake for Library Authors.
I had this same question. For my part, I am not willing to assume that the user has either CMake or vcpkg preinstalled.
Here is my solution so far, as a Windows batch file:
REM Bootstrap...
set VCPKG_PARENT_DIR=C:\Projects
set CMAKE_VERSION=3.20.2
mkdir "%VCPKG_PARENT_DIR%"
pushd "%VCPKG_PARENT_DIR%"
git clone https://github.com/Microsoft/vcpkg.git
.\vcpkg\bootstrap-vcpkg.bat -disableMetrics
REM Define VCPKG_ROOT so the toolchain file path below resolves.
set VCPKG_ROOT=%VCPKG_PARENT_DIR%\vcpkg
set PATH=%PATH%;%VCPKG_ROOT%\downloads\tools\cmake-%CMAKE_VERSION%-windows\cmake-%CMAKE_VERSION%-windows-i386\bin
set VCPKG_DEFAULT_TRIPLET=x64-windows
set PYTHONHOME=%VCPKG_ROOT%\packages\python3_x64-windows\tools\python3
popd
REM Build the project...
cmake -B build -S .\engine\ -DCMAKE_TOOLCHAIN_FILE=%VCPKG_ROOT%\scripts\buildsystems\vcpkg.cmake -DCMAKE_BUILD_TYPE=Release -DUSE_PYTHON_3=ON
cmake --build .\build\ --config Release
mkdir bin
xcopy .\build\Release\*.* .\bin\
xcopy .\build\objconv\Release\*.* .\bin\
xcopy .\build\setup\Release\*.* .\bin\
It could use some improvement, but hopefully this gives you an idea of one route you could take.
I am cross-compiling a C library using CMake and a toolchain file. My toolchain file sets CMAKE_SYSROOT to the appropriate value so compilation works with no issues. However, when installing, the library does not install to the directory pointed to by CMAKE_SYSROOT. I can achieve that effect by running make install DESTDIR=xxx though.
I understand that there are two separate concepts here:
The cross-compilation toolchain, which consists of binaries that can be run on my local architecture
The CMAKE_SYSROOT, which is the root directory of a target-architecture filesystem, containing header files and libraries, passed to e.g. gcc through the --sysroot flag.
I have two questions:
Is it a good idea to conflate the sysroot where my cross-compilation toolchain lives with the sysroot where all my cross-compiled libraries will be installed? It feels to me like they should be the same, but I am not sure, and to CMake it appears they are distinct concepts. Update: answered in the comments below; these are indeed distinct concepts.
What is the modern CMake way to specify the installation directory when cross-compiling like described above? Update: I believe this should be the same as CMAKE_SYSROOT, and I feel CMake should offer a way to only define this once somewhere.
Thanks!
There is no interference between sysroot and install directory (prefix).
The sysroot is given by the CMAKE_SYSROOT variable and denotes the root of the target filesystem consulted during the build process (e.g. passed to the compiler via --sysroot to locate the target's headers and libraries).
The install directory (prefix) is given by the CMAKE_INSTALL_PREFIX variable and denotes the path where the project will live once installed. E.g. with install prefix /usr/local, the project's executable foo expects to be run as /usr/local/bin/foo.
Note that with the default installation procedure, CMake installs files to the host machine. To install files onto the target machine, this procedure needs to be adjusted. Passing DESTDIR=xxx to make install is one way to redirect the installed files into a staging tree for the target. Another way is to create a package (e.g. with CPack) on the host and install that package on the target machine.
Note that in the above paragraph it is irrelevant whether cross-compilation took place or not: it is possible to build the project on one machine and install it on another, similar one, without any cross-compilation.
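As a concrete sketch (assuming a Makefile generator; the staging path is hypothetical):

cmake -DCMAKE_INSTALL_PREFIX=/usr -B build
cmake --build build
make -C build install DESTDIR=$HOME/target-staging

The files then land under $HOME/target-staging/usr/..., while the installed binaries still expect to live under /usr on the target.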
We are converting a large Makefile-based project to a CMake-based system. I have numerous dependencies that I need to build prior to building our code. The first three dependencies are built using the following:
add_subdirectory(dependencies/libexpat/expat)
add_subdirectory(dependencies/libuuid-1.0.3)
add_subdirectory(dependencies/log4c-1.2.4)
expat has its own CMakeLists.txt file and builds with no problems. I would like to force expat to install to the staging directory before continuing. For libuuid I am using ExternalProject_Add, and as part of that process it does install into the staging directory.
Then when I build log4c, which needs expat, I can point it to the location of expat. Otherwise I would need to somehow get access to the absolute path of expat's temporary build location.
I've tried to add the following after add_subdirectory:
add_subdirectory(dependencies/libexpat/expat)
add_subdirectory(dependencies/libuuid-1.0.3)
install(TARGETS expat LIBRARY DESTINATION ${CMAKE_INSTALL_PREFIX}/usr/lib)
add_subdirectory(dependencies/log4c-1.2.4)
Unfortunately CMake will not run expat's install code. How do I force expat to install after building but before it builds the rest of the project?
This looks like the primary use case for ExternalProject_Add, which is best used as a superbuild setup. This means that your top-level project (the "superbuild") does not build any actual code and instead consists only of ExternalProject_Add calls. Your "real" project is added as one of these "external" projects. This allows you to set up the superbuild with all dependencies, ordering, etc.
The workflow is then as follows:
Generate the superbuild project.
Build the superbuild project. This will build and install all dependencies, and also generate (and build) your real project.
Switch to the buildsystem generated for your real project and start doing further development using that. Your dependencies are already correctly set up and installed by the build of the superbuild project in the previous step.
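A minimal superbuild sketch might look like this (directory names are taken from the question; the staging prefix and the real project's source directory are assumptions, and this assumes each dependency can be built with CMake):

cmake_minimum_required(VERSION 3.16)
project(superbuild NONE)

include(ExternalProject)

# Hypothetical staging prefix shared by all dependencies.
set(STAGING_DIR ${CMAKE_BINARY_DIR}/staging)

ExternalProject_Add(expat
    SOURCE_DIR ${CMAKE_SOURCE_DIR}/dependencies/libexpat/expat
    CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${STAGING_DIR}
)

ExternalProject_Add(log4c
    SOURCE_DIR ${CMAKE_SOURCE_DIR}/dependencies/log4c-1.2.4
    CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${STAGING_DIR}
               -DCMAKE_PREFIX_PATH=${STAGING_DIR}  # finds the staged expat
    DEPENDS expat
)

# The "real" project, built last against the staged dependencies.
ExternalProject_Add(my_project
    SOURCE_DIR ${CMAKE_SOURCE_DIR}/src
    CMAKE_ARGS -DCMAKE_PREFIX_PATH=${STAGING_DIR}
    DEPENDS expat log4c
)

The DEPENDS clauses enforce the install ordering that add_subdirectory cannot: expat is fully built and installed into the staging directory before log4c configures.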
I have a fairly complicated project that builds using CMake. The project uses CPack to generate .deb packages. When I just build the project with make (i.e. not building a .deb) a clean build takes roughly 2 minutes. However, when I build a package using make package the build takes roughly 12 minutes, with most of the extra 10 minutes spent during CPack: - Run preinstall target for....
Building a .deb "manually" using dpkg-deb takes a couple seconds at most, so I'm wondering what CPack is doing when it's running the preinstall target.
I'm not necessarily interested in why it takes a long for my project specifically. I'm more curious about how the preinstall target fits into CPack's process of building a Debian package and how CPack chooses what actions will be taken when running the preinstall target.
As far as I can tell from the CMake Mailing List and this related question, the 'preinstall target' tries to rebuild your entire project, except without logging any build output to stdout. (And likely without any flags to enable parallelism either)
Apparently, this only happens if the CMake Generator / CPACK_CMAKE_GENERATOR you are using is Unix Makefiles, and is the result of CMake install commands not carrying any target dependency information with them.
One possible workaround is lying to CPack about what generator you're using, for example by setting CPACK_CMAKE_GENERATOR to Ninja.
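In CMake terms the workaround is a one-liner placed before CPack is included (a sketch of the trick described above, not an officially documented interface):

# Lie about the generator so CPack skips the Makefile-based preinstall rebuild.
set(CPACK_CMAKE_GENERATOR "Ninja")
include(CPack)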
CMake Tutorial
In Step 6 of the CMake tutorial, it says "To create a source distribution you would type" cpack --config CPackSourceConfig.cmake. I tried this command and got four tarballs: output.tar.bz, output.tar.gz, output.tar.xz and output.tar.Z.
The tarball is just a compressed copy of the current directory. The packed Makefile contains absolute paths from my system, which means I would need to run cmake again if I distributed it to another system.
So what is the point of using cpack?
One of the points of using CPack to create a source package is that it does not include version-control files in the resulting tarball (see the CPACK_SOURCE_IGNORE_FILES variable).
It is also possible to further modify the list of files intended for the source package. E.g., one can exclude internal documentation (useful for developers only) but include some pre-generated files, so the user doesn't need additional tools, like Flex/Bison, to build from the source package.
Though CPack (and especially its source package functionality) is poorly documented (see, e.g., this mailing list thread and this bug report), source package generation is relatively flexible.
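As a brief sketch of that variable (the patterns here are illustrative):

# List of regular expressions; matching paths are excluded from the source tarball.
set(CPACK_SOURCE_IGNORE_FILES
    "/\\.git/"          # version-control metadata
    "/build/"           # local build trees
    "/docs/internal/"   # developer-only documentation
)
include(CPack)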