How to build several configurations at once with CMake?

E.g. how should I build release and debug versions at the same time? I guess the answer makes use of cache variables and some kind of "collection" of them.
It is common to get configuration parameters from cache variables, isn't it? If the answer is yes, what is the best way to use several "collections" of them?
Thanks a lot!

You don't specify the platform you are talking about. The Makefile-based generators will only build one configuration at a time, and the normal way to build several configurations is to use separate build trees, e.g. one for 64-bit Linux on Intel, one for 32-bit Windows, etc. Most CMake projects advise out-of-source builds, and assuming you wrote your CMakeLists files correctly you could have ~/src/YourProject along with ~/build/YourProject-Release and ~/build/YourProject-Debug.
This is the advised way to do it, assuming your source tree does not have a CMakeCache.txt etc. in it. You can then run cmake -DCMAKE_BUILD_TYPE:STRING=Debug ~/src/YourProject in the debug directory, and similarly for the release one. This has the advantage that you can point dependent projects at the appropriate configuration.
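For example (a minimal sketch; the project name and paths are illustrative):

    # Configure and build a debug tree
    mkdir -p ~/build/YourProject-Debug ~/build/YourProject-Release
    cd ~/build/YourProject-Debug
    cmake -DCMAKE_BUILD_TYPE:STRING=Debug ~/src/YourProject
    make

    # Configure and build a release tree from the same sources
    cd ~/build/YourProject-Release
    cmake -DCMAKE_BUILD_TYPE:STRING=Release ~/src/YourProject
    make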
The Boost CMake project has also explored building all configurations in the same build tree using library name mangling to differentiate. This may be worth looking at if you must build all configurations in the same build tree.

(for fellow googlers)
Be careful not to confuse build types and build configurations.
If you really mean "build types" such as debug and release and want to build them at the same time, then the CMake FAQ gives an answer: How can I build multiple modes without switching?
Basically it involves using several out-of-source builds.

Related

When should I use find_package

I am learning CMake, and I find it hard to understand when I should use find_package.
For separate compilation, we need to let the compiler know where to find the header files, and this can be done with target_include_directories. For linking, we need to let the linker know where the implementation is, and this can be done with target_link_libraries. It seems like that is all we need to do to compile a project. Could anyone explain why and when we should use find_package?
If a package you intend to use allows for the use of find_package, you should use it. If a package comes with a working configuration script, it will encourage you to use the library the way it's intended to be used, and it will likely come with a simple way to add the include directories and dependencies you require.
When is it possible to use find_package?
There needs to be either a configuration script (<PackageName>Config.cmake or packagename-config.cmake) that gets installed with the package, or a find script (Find<PackageName>.cmake). The latter in some cases even comes with the CMake installation instead of the installed package; see CMake find modules.
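For illustration, consuming such a package typically looks like this (a minimal sketch; the package Foo and the imported target Foo::Foo are placeholders, not a real library):

    # Works whenever CMake can locate FooConfig.cmake or FindFoo.cmake
    find_package(Foo 1.2 REQUIRED)

    add_executable(app main.cpp)
    # The imported target carries its include directories and dependencies along
    target_link_libraries(app PRIVATE Foo::Foo)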
Should you create missing scripts yourself?
There are several benefits to creating a package configuration script yourself, even if a package doesn't come with an existing configuration or find script (a minimal sketch follows the list below):
The scripts separate the information about libraries from the logic used to create your own targets. The use of the two commands find_package and target_link_libraries is concise; any logic you may need to collect and apply information such as dependencies, include directories, or the minimal version of the C++ standard to use would probably take up much more space in your CMakeLists.txt files, making them harder to understand.
It makes the library used easy to replace. Basically, all it takes to go with a different version of the same package is modifying CMAKE_PREFIX_PATH, CMAKE_MODULE_PATH or the package-specific <PackageName>_ROOT variables. If you ever want to try out different versions of the same library, this is incredibly useful.
The logic is reusable. If you need the same functionality in a different project, it takes little effort to reuse the same logic. Even if a library is only used within a single project, but in multiple places, the use of find_package can help keep the logic for "importing" a lib close to its use (see also the first bullet point).
There can be multiple versions of the same library, with automatic selection of applicable ones. Note that this requires a version file, which allows you to specify whether a version of the package is suitable for the current project. This allows for checking the target architecture, etc., which is helpful when cross-compiling or when providing both 32-bit and 64-bit versions of a library on Windows: if a version file indicates a mismatch, the search for a suitable version simply continues with different paths instead of failing fatally at the first mismatch.
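To make these points concrete, a minimal hand-written find script could look like this (a sketch only; "Foo" and its header and library names are hypothetical):

    # FindFoo.cmake -- locate the pieces, then wrap them in an imported target
    find_path(Foo_INCLUDE_DIR foo/foo.h)
    find_library(Foo_LIBRARY foo)

    include(FindPackageHandleStandardArgs)
    find_package_handle_standard_args(Foo
        REQUIRED_VARS Foo_LIBRARY Foo_INCLUDE_DIR)

    if(Foo_FOUND AND NOT TARGET Foo::Foo)
        add_library(Foo::Foo UNKNOWN IMPORTED)
        set_target_properties(Foo::Foo PROPERTIES
            IMPORTED_LOCATION "${Foo_LIBRARY}"
            INTERFACE_INCLUDE_DIRECTORIES "${Foo_INCLUDE_DIR}")
    endif()

Consumers then only ever see find_package(Foo) and Foo::Foo; all of the search logic stays in this one file.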
You will probably find CMake's guide on using dependencies helpful. It describes find_package and alternatives, and when each one is relevant / useful. Here's an excerpt from the section on find_package (italics added):
A package needed by the project may already be built and available at some location on the user's system. That package might have also been built by CMake, or it could have used a different build system entirely. It might even just be a collection of files that didn't need to be built at all. CMake provides the find_package() command for these scenarios. It searches well-known locations, along with additional hints and paths provided by the project or user. It also supports package components and packages being optional. Result variables are provided to allow the project to customize its own behavior according to whether the package or specific components were found.
find_package requires that the package provide CMake support in the form of specific files that describe the package's contents to CMake. Some library authors provide this support themselves (the most desirable scenario for you, the package consumer); some don't, but are prominent enough that CMake itself comes with such files for them; and in the worst case there is no CMake support at all, in which case you can either do something to get one of the previous good outcomes, or perform some kludges to get the job done (i.e. define the targets yourself in your project's CMake config).

Force CMake to install targets to architecture-specific directories?

I'm currently having this issue with the Google Protobuf Library, but it is a recurring problem and will likely occur with many if not all 3rd-party packages that I want to build and install from source.
I'm developing for Windows, and we need to be able to generate both 32-bit and 64-bit versions of our DLLs. It was relatively straightforward to get CMake to install our own modules to architecture-specific subdirectories, e.g. D:\libraries\bin\i686 and D:\libraries\lib\i686. But I'm having trouble achieving the same thing with 3rd-party libraries such as Protobuf.
I could, of course, use distinct CMAKE_INSTALL_PREFIX and CMAKE_PREFIX_PATH combinations (e.g. D:\libraries-i686 and D:\libraries-x86_64), and I will probably end up doing just that, but it bothers me that there doesn't seem to be a better alternative. The docs for find_package() clearly show that the search procedure does attempt architecture-specific search paths, so why do the CMake files of popular libraries not generally seem to support installing to architecture-specific subdirectories?
Or could it be that it is just a matter of setting the right CMAKE_XXX variable?
Thanks to @arrowd for pointing me in the right direction, I now have my answer, though it is not exactly what I had hoped for.
CMAKE_LIBRARY_OUTPUT_DIRECTORY and CMAKE_RUNTIME_OUTPUT_DIRECTORY, however, specify the build output directories, not the install directories. As it turns out, though, there are variables for the install directories too, called CMAKE_INSTALL_BINDIR and CMAKE_INSTALL_LIBDIR; they are plainly visible (along with plenty more) in the cmake-gui interface when "Advanced" is checked.
I tried setting those two manually (to bin\i686 and lib\i686), and it works: the Protobuf INSTALL target copies the files where I wanted to have them, i.e. where the CMake script of my consumer project will find them in an architecture-safe manner.
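For reference, the configure step then looks roughly like this (a sketch for a Windows cmd shell; the Protobuf source path is illustrative):

    cmake -DCMAKE_INSTALL_PREFIX=D:/libraries ^
          -DCMAKE_INSTALL_BINDIR=bin/i686 ^
          -DCMAKE_INSTALL_LIBDIR=lib/i686 ^
          D:/src/protobuf/cmake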
I'm not sure how I feel about this - I would have preferred something like a CMAKE_INSTALL_ARCHITECTURE or CMAKE_ARCHITECTURE_SUBDIR variable that CMake would automatically append to relevant install paths. The solution above requires overriding defaults that I would prefer to leave untouched.
Under the circumstances, my fallback approach might still be the better option. That approach, however, requires that the choice of architecture be made very early on, typically when running the script that initializes the CMake-specific environment variables passed to cmake when configuring build directories. And it's worse when using cmake-gui, which requires the user to set all the directories manually.
In the end, I'm still undecided.

How to set up CLion to use waf as its build system

I am trying to configure my IntelliJ CLion IDE to work with ns-3. Since ns-3 uses waf, it is trickier than I thought, and I would be really happy to hear any advice.
CLion has supported compilation databases for quite a while now, and waf, luckily, is able to generate one using the clang_compilation_database extension.
You'll need to load it within your options and configure steps, i.e. like this:
def options(ctx):
    # Assuming you just copied the script into a directory called tools
    ctx.load('clang_compilation_database', tooldir='tools')
    # ...

def configure(ctx):
    ctx.load('clang_compilation_database', tooldir='tools')
    # ...
Now you can call waf clangdb; you'll be presented with a file called compile_commands.json in your build directory.
CLion only uses CMake for its internal project definition, so you have to have a CMake config.
It can be very simple and mirror parts of another build system you actually use, but how CLion treats files and what it does when you tell it to build something is defined by CMake and only CMake.
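Such a mirror config might look like this (a rough sketch with illustrative names and paths; it only gives CLion something to index and delegates the actual build to waf):

    cmake_minimum_required(VERSION 3.10)
    project(ns3_mirror CXX)

    # Mirror the source list so CLion can index and navigate it
    file(GLOB_RECURSE NS3_SOURCES src/*.cc src/*.h)
    add_library(ns3_index EXCLUDE_FROM_ALL ${NS3_SOURCES})

    # "Build" in CLion just runs waf in the source tree
    add_custom_target(waf_build ALL
        COMMAND ./waf build
        WORKING_DIRECTORY ${CMAKE_SOURCE_DIR})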
You could set up compilation databases as suggested by Julian, or you could try my fork, if you don't mind using a not completely up-to-date fork of the upstream project: https://github.com/Gabrielcarvfer/NS3.
Visual Studio can also be used with CMake projects and WSL, but ClangCL/MSVC support is being worked on.
I plan on opening an MR to upstream the CMake support, but it is a lot of work to replace waf completely.

What does the build configuration "profile" do?

I just upgraded to Qt 5.6.0 and noticed something that I had not noticed before: a new project in Qt Creator now gets three build configurations: "release", "debug" and "profile". It seems to me that "profile" is new. If so, what is its purpose?
When I looked at Projects -> Build Steps -> Effective qmake call, I found these additional CONFIG values:
"CONFIG+=force_debug_info" "CONFIG+=separate_debug_info"
I also found some comments about them that could clarify why this is needed:
Hello all,
once more I'm preparing a Qt 5 build for profiling purposes and wonder again why there is no way to combine -release and -debug in Qt's configure script. The only way to get a sane build for profiling Qt code itself that I know of is hacking the mkspec and ensuring that -g is added even in -release mode.
Is there any reason for that? Could we improve this situation somehow to make it simpler to get a Qt build with optimizations and debug symbols? Am I missing the recommended way to get this done?
This option produces release builds (with all compiler optimizations), but with the debug symbols (pdb files) that are required for profiling the performance of C++ code.
According to the online Qt documentation (Breadcrumb: "Qt Creator Manual" > "Specifying Build Settings", end of 1st paragraph):
A profile configuration is an optimized release build that is delivered with separate debug information. It is best suited for analyzing applications.
The link can be found here. I'm still a Qt newbie, and have personally never used this configuration.

Determine all of the file dependencies in a build process that uses makefiles and ant scripts

I'm trying to understand the build process of a codebase. The project uses both autoconf (configure scripts that generate makefiles) and Maven.
I would like to be able to identify all of the file dependencies in the project, so that for any output file that ends up being generated by a build, I can identify how it was actually produced. Ultimately, I'd like to generate a diagram using something like Graphviz to visualize the dependencies, but for now I just want to extract them.
Is there any automated way to do this? In other words, given some makefiles and Maven or ant XML files, and the name of the top-level target, is there a way to identify all of the files that will be generated, the programs used to generate them, and the input files associated with those programs?
Electric Accelerator and ClearCase are two systems that do this, by running the build and watching what it does (presumably by intercepting operating system calls). This has the advantage of working for any tool, and being unaffected by buggy makefiles (hint: they're all buggy).
That's probably the only reliable way for non-trivial makefiles, since they all do things like generating new make rules on the fly, or have behaviour that depends on the existence of files on disk that are not explicitly listed in rules.
I don't know about the Maven side, but once you've ./configured the project, you could grep through the output of make -pn (make --print-data-base --dry-run) to find the dependencies. This will probably be more annoying if the build is based on recursive make, but it is still manageable.
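As a rough sketch of that approach (GNU make assumed; the grep pattern is a heuristic that keeps lines shaped like "target: prerequisites" and drops variable assignments):

    ./configure
    make -pn 2>/dev/null \
      | grep -E '^[^.#[:space:]][^=:]*:([^=]|$)' \
      | sort -u > deps.txt

    # Optional: turn the rule lines into a Graphviz digraph
    {
      echo 'digraph deps {'
      awk -F': ' 'NF == 2 { n = split($2, d, " ")
                            for (i = 1; i <= n; i++)
                              printf "\"%s\" -> \"%s\";\n", $1, d[i] }' deps.txt
      echo '}'
    } > deps.dot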
Note that if you're using automake, it computes detailed dependencies as a side effect of compilation, so you won't get all the dependencies on #included headers until you do a full build.
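If you want to harvest those computed dependencies afterwards, automake writes them to per-object fragments, by default under .deps directories (a sketch; libtool objects use .Plo instead of .Po):

    # Collect the generated dependency fragments after a full build
    find . -path '*/.deps/*.Po' -exec cat {} + > all-deps.mk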