How to get CMake to find the size of a type in the third-party header mpi.h?

I am working on a free-software project which involves high performance computing and the MPI library.
In my code, I need to know the size of the MPI_Offset type, which is defined in mpi.h.
Normally such a project would be built using autotools, and this problem would be easily solved. But for my sins, I am working with a CMake build, and I can't find any way to perform this simple task. There must be a way to do it: it is commonly done in autotools projects, so I assume it is also possible in CMake.
When I use:
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
It fails because mpi.h is not included in the generated C code.
Is there a way to tell check_type_size() to include mpi.h?

This is done via CMAKE_EXTRA_INCLUDE_FILES:
include(CheckTypeSize)
find_package(MPI)
include_directories(SYSTEM ${MPI_INCLUDE_PATH})
# If the check cannot find mpi.h, additionally set CMAKE_REQUIRED_INCLUDES
# to ${MPI_INCLUDE_PATH}; CheckTypeSize honors that variable as well.
set(CMAKE_EXTRA_INCLUDE_FILES "mpi.h")   # headers to #include in the generated test program
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
set(CMAKE_EXTRA_INCLUDE_FILES)           # reset so later checks are unaffected
It may be more common to write platform checks with autotools, so here is some more information on how to write platform checks with CMake.
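To consume the result from C code, the usual pattern is to generate a configuration header with configure_file(). A minimal sketch, assuming a template file config.h.in of my own naming (check_type_size() also sets HAVE_SIZEOF_MPI_OFFSET, indicating whether the type was found at all):
# CMakeLists.txt, after the check above
configure_file(config.h.in config.h)
include_directories(${CMAKE_CURRENT_BINARY_DIR})
# config.h.in
#cmakedefine HAVE_SIZEOF_MPI_OFFSET
#define SIZEOF_MPI_OFFSET @SIZEOF_MPI_OFFSET@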
On a personal note: while CMake is certainly not the most pleasant exercise, for me autotools is reserved for the capital sins. It is really hard for me to defend CMake, but in this instance the behavior is even documented. Naturally, setting a separate "variable" that you then have to reset after the fact, instead of just passing it as a parameter, conforms nicely to the surprising "design principles" of CMake.

Related

Change toolchain in a subfolder with cmake

It seems like CMake is fairly entrenched in its view that there should be one, and only one, CMAKE_CXX_COMPILER for all C++ source files. I can't find a way to override this on a per-target basis. This makes a mix of host-and-cross compiling in a single CMakeLists.txt very difficult with the built-in CMake facilities.
So, my question is: what's the best way to use multiple compilers for the same language (i.e. C++)?
It's impossible to do this with CMake.
CMake only keeps one set of compiler properties, which is shared by all targets in a CMakeLists.txt file. If you want to use two compilers, you need to run CMake twice. This is even true for, e.g., building 32-bit and 64-bit binaries with the same compiler toolchain.
The quick-and-dirty way around this is using custom commands. But then you end up with what are basically glorified shell-scripts, which is probably not what you want.
The clean solution is: Don't put them in the same CMakeLists.txt! You can't link between different architectures anyway, so there is no need for them to be in the same file. You may reduce redundancies by refactoring common parts of the CMake scripts into separate files and include() them.
The main disadvantage here is that you lose the ability to build with a single command, but you can solve that by writing a wrapper in your favorite scripting language that takes care of calling the different CMake-makefiles.
You might want to look at ExternalProject:
http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html
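One way to combine the two ideas is a top-level "superbuild" that drives one sub-build per toolchain. A rough sketch, assuming hypothetical host/ and firmware/ subdirectories that each contain their own CMakeLists.txt, and a cross toolchain file whose path is my own placeholder:
include(ExternalProject)
# Host-side tools and tests, built with the default compiler.
ExternalProject_Add(host_build
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/host
  INSTALL_COMMAND "")
# Target code, built by a second CMake run using a cross toolchain.
ExternalProject_Add(firmware_build
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/firmware
  CMAKE_ARGS -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/cmake/cross-toolchain.cmake
  INSTALL_COMMAND "")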
It is not actually impossible, despite what the top answer says. I have the same problem as the OP: some of my sources are cross-compiled for a Raspberry Pi Pico, and some unit tests run on my host system.
To make this work, I'm using the very shameful set() to override the compiler in the CMakeLists.txt of my test folder. Works great.
# CMakeLists.txt of the tests subfolder: use a host compiler here,
# overriding the cross compiler chosen by the parent project.
if(DEFINED ENV{HOST_CXX_COMPILER})
  set(CMAKE_CXX_COMPILER $ENV{HOST_CXX_COMPILER})
else()
  set(CMAKE_CXX_COMPILER "g++")
endif()
# Drop the cross-compilation flags inherited from the parent scope.
set(CMAKE_CXX_FLAGS "")
The CMake devs/community seem to be very much against using set to change the compiler. They assume that you need to use one compiler for the entire project, which is an incorrect assumption for embedded-systems projects.
My solution above works, and I think it fits the philosophy. Users can still change their chosen compiler via an environment variable; if it's not set, then I assume g++. set only changes variables for the current scope, so this doesn't affect the rest of the project.
To extend @Bill Hoffman's answer:
Build your project as a super-build, using some kind of template like the one at https://github.com/Sarcasm/cmake-superbuild, which will configure both the dependencies and your project as ExternalProjects (standalone CMake configure/build/install environments).

Portable whole-archive linking in CMake

If you want to link a static library into a shared library or executable while keeping all the symbols visible (e.g. so you can dlopen it later to find them), a non-portable way to do this on Linux/BSD is to use the flag -Wl,--whole-archive. On macOS, the equivalent flag is -Wl,-force_load,<library>; on Windows it's apparently /WHOLEARCHIVE.
Is there a portable way to do this in CMake?
I know I can add linker flags with target_link_libraries. I can detect the OS. However, since the macOS version of this includes the library name in the same string as the flag (no spaces), I think this messes with CMake's usual handling of link targets and so on. The more compatible I try to make this, the more I have to bend over backwards to make it happen.
And this is without even getting into more unusual compilers like Intel, PGI, Cray, IBM, etc. Those may not be compilers that most people commonly deal with, but in some domains they are basically unavoidable.
Are there any better options?
flink.cmake will help you; it hides the platform-specific whole-archive flags behind a single function:
target_force_link_libraries(<target>
<PRIVATE|PUBLIC|INTERFACE> <item>...
[<PRIVATE|PUBLIC|INTERFACE> <item>...]...
)
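If you can require CMake 3.24 or newer, there is also a built-in portable spelling: the LINK_LIBRARY generator expression with the WHOLE_ARCHIVE feature, which CMake expands to the right flag for each toolchain (--whole-archive, -force_load, or /WHOLEARCHIVE). A minimal sketch with placeholder target names of my choosing:
cmake_minimum_required(VERSION 3.24)
project(demo CXX)
add_library(plugin_impl STATIC plugin_impl.cpp)
add_library(host SHARED host.cpp)
# CMake expands WHOLE_ARCHIVE to the platform's whole-archive flag(s).
target_link_libraries(host PRIVATE "$<LINK_LIBRARY:WHOLE_ARCHIVE,plugin_impl>")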

Compile a compiler as an external project and use it?

I have to build a slightly patched version of GCC for a project and then compile the rest of the project with it. I am wondering what is the best way to do this. I am currently using ExternalProject_Add to build the compiler as a dependency for other binaries, but I don't know how to change the compiler for a part of the project.
Your best bet is probably to structure things as a superbuild. A top level project would have two subprojects built using ExternalProject_Add. The compiler would be the first subproject and the second would be your actual project which could then make use of the compiler by making your real subproject depend on the compiler subproject.
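A rough sketch of that superbuild, assuming the patched GCC installs into a prefix inside the build tree (all paths and names below are placeholders of my choosing):
include(ExternalProject)
# Subproject 1: build and install the patched GCC.
ExternalProject_Add(gcc
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/gcc
  CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR>
  BUILD_COMMAND make
  INSTALL_COMMAND make install)
ExternalProject_Get_Property(gcc INSTALL_DIR)
# Subproject 2: build the actual project with the just-built compiler.
ExternalProject_Add(myproject
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/src
  CMAKE_ARGS -DCMAKE_C_COMPILER=${INSTALL_DIR}/bin/gcc
             -DCMAKE_CXX_COMPILER=${INSTALL_DIR}/bin/g++
  DEPENDS gcc)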
A second alternative is discussed here, where your actual project remains the top-level project, but the compiler is built in a sub-build invoked via execute_process(). I've used this approach in real-world situations, and while it does work, I'd personally still go with the superbuild approach if I had the choice, since it's a bit cleaner and perhaps better understood by other developers.
Lastly, consider whether something like hunter might be able to take care of building your compiler for you. Depending on what/how you need to patch GCC, this may or may not be the most attractive approach.
I would use two CMake projects: one building your compiler, the other building your actual project with the compiler you built. The compiler is such an essential part of a CMake build that changing it mid-project is asking for trouble.
If you prefer having everything in one project, you can invoke the actual project in a sub-folder via add_custom_command. As a user this would be more surprising, but it could lead to better integration.

Getting imported targets through `find_package`?

The CMake manual of Qt 5 uses find_package and says:
Imported targets are created for each Qt module. Imported target names should be preferred instead of using a variable like Qt5<Module>_LIBRARIES in CMake commands such as target_link_libraries.
Is it special for Qt or does find_package generate imported targets for all libraries? The documentation of find_package in CMake 3.0 says:
When the package is found package-specific information is provided through variables and Imported Targets documented by the package itself.
And the manual for cmake-packages says:
The result of using find_package is either a set of IMPORTED targets, or a set of variables corresponding to build-relevant information.
But I did not see any other FindXXX.cmake script whose documentation says that an imported target is created.
find_package is a two-headed beast these days:
CMake provides direct support for two forms of packages, Config-file Packages
and Find-module Packages
Source
Now, what does that actually mean?
Find-module packages are the ones you are probably most familiar with. They execute a script of CMake code (such as this one) that does a bunch of calls to functions like find_library and find_path to figure out where to locate a library.
The big advantage of this approach is that it is extremely generic. As long as there is something on the filesystem, we can find it. The big downside is that it often provides little more information than the physical location of that something. That is, the result of a find-module operation is typically just a bunch of filesystem paths. This means that modelling stuff like transitive dependencies or multiple build configurations is rather difficult.
This becomes especially painful if the thing you are trying to find has itself been built with CMake. In that case, you already have a bunch of stuff modeled in your build scripts, which you now need to painstakingly reconstruct for the find script, so that it becomes available to downstream projects.
This is where config-file packages shine. Unlike find-modules, the result of running the script is not just a bunch of paths: it creates fully functional CMake targets. To the dependent project it looks as if the dependencies had been built as part of that same project.
This allows much more information to be transported in a very convenient way. The obvious downside is that config-file scripts are much more complex than find-scripts. Hence you do not want to write them yourself, but have CMake generate them for you. Or rather, have the dependency provide a config-file as part of its deployment, which you can then simply load with a find_package call. And that is exactly what Qt5 does.
This also means, if your own project is a library, consider generating a config file as part of the build process. It's not the most straightforward feature of CMake, but the results are pretty powerful.
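A minimal sketch of what that can look like for a hypothetical library foo (all names and install destinations below are placeholders of my choosing):
# Library target with its usage requirements attached.
add_library(foo src/foo.cpp)
target_include_directories(foo PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
  $<INSTALL_INTERFACE:include>)
# Install the target and record it in an export set.
install(TARGETS foo EXPORT fooTargets
  ARCHIVE DESTINATION lib
  LIBRARY DESTINATION lib
  INCLUDES DESTINATION include)
# Generate fooConfig.cmake, so downstream projects can call
# find_package(foo) and link against the imported target foo::foo.
install(EXPORT fooTargets
  FILE fooConfig.cmake
  NAMESPACE foo::
  DESTINATION lib/cmake/foo)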
Here is a quick comparison of how the two approaches typically look in CMake code:
Find-module style
find_package(foo)
target_link_libraries(bar ${FOO_LIBRARIES})
target_include_directories(bar PRIVATE ${FOO_INCLUDE_DIR})
# [...] potentially lots of other stuff that has to be set manually
Config-file style
find_package(foo)
target_link_libraries(bar foo)
# magic!
tl;dr: Always prefer config-file packages if the dependency provides them. If not, use a find-script instead.
Actually there is no "magic" in the results of find_package: the command just searches for an appropriate FindXXX.cmake script and executes it.
If the Find script sets the XXX_LIBRARY variable, then the caller can use that variable.
If the Find script creates imported targets, then the caller can use those targets.
If the Find script neither sets the XXX_LIBRARY variable nor creates imported targets... well, then usage of the script is somewhat different.
The documentation for find_package describes the usual usage of Find scripts. But in any case you need to consult the documentation for the concrete script (this documentation is normally contained in the script itself).
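For illustration, here is a minimal Find script that provides an imported target; FindFoo.cmake, foo.h, and Foo::Foo are all hypothetical names:
# FindFoo.cmake
find_path(Foo_INCLUDE_DIR NAMES foo.h)
find_library(Foo_LIBRARY NAMES foo)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Foo
  REQUIRED_VARS Foo_LIBRARY Foo_INCLUDE_DIR)
if(Foo_FOUND AND NOT TARGET Foo::Foo)
  # Expose the find results as a proper imported target.
  add_library(Foo::Foo UNKNOWN IMPORTED)
  set_target_properties(Foo::Foo PROPERTIES
    IMPORTED_LOCATION "${Foo_LIBRARY}"
    INTERFACE_INCLUDE_DIRECTORIES "${Foo_INCLUDE_DIR}")
endif()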

Does Ada have a preprocessor?

To support multiple platforms in C/C++, one would use the preprocessor to enable conditional compilation, e.g.:
#ifdef _WIN32
#include <windows.h>
#endif
How can you do this in Ada? Does Ada have a preprocessor?
The answer to your question is no, Ada does not have a preprocessor built into the language. That means each compiler may or may not have one, and there is no uniform syntax for preprocessing and things like conditional compilation. This was intentional: preprocessing is considered "harmful" to the Ada ethos.
There are almost always ways around the lack of a preprocessor, but the solution can often be a little cumbersome. For example, you can declare the platform-specific subprograms as 'separate' and then use build tools to compile the correct one (either a project system, using pragma body replacement, or a very simple directory scheme: put all the Windows files in /windows/ and all the Linux files in /linux/, and include the appropriate directory for the platform).
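For instance, a minimal sketch of that 'separate' technique, with hypothetical names (the shared spec and body stub are compiled everywhere; each platform directory supplies its own subunit):
-- platform.ads (shared)
package Platform is
   procedure Initialize;
end Platform;
-- platform.adb (shared): the body declares Initialize as a stub
package body Platform is
   procedure Initialize is separate;
end Platform;
-- windows/platform-initialize.adb: the Windows subunit
separate (Platform)
procedure Initialize is
begin
   null;  --  Windows-specific setup would go here
end Initialize;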
All that being said, the GNAT developers recognized that sometimes you need a preprocessor, and created gnatprep. It should work regardless of the compiler (but you will need to insert it into your build process). Similarly, for simple things like conditional compilation, you can probably just use the C preprocessor, or even roll your own very simple one.
AdaCore provides the gnatprep preprocessor, which is specialized for Ada. They state that gnatprep "does not depend on any special GNAT features", so it sounds as though it should work with non-GNAT Ada compilers. Their User Guide also provides some conditional compilation advice.
I have been on a project where m4 was used as well, with the Ada spec and body files suffixed as ".m4s" and ".m4b", respectively.
My preference is really to avoid preprocessing altogether, and just use specialized bodies, setting up CM and the build process to manage them.
No, but the C preprocessor (cpp) or m4 can be run over any file, either on the command line or from a build tool like make or ant. I suggest calling your .ada file something else. I have done this for some time with Java files: I name the file .m4 and use a make rule to create the .java, then build it in the normal way.
I hope that helps.
Yes, it does.
If you are using GNAT compiler, you can use gnatprep for doing the preprocessing, or if you use GNAT Programming Studio you can configure your project file to define some conditional compilation switches like
#if SOMESWITCH then
-- This code is compiled only if the switch SOMESWITCH is active in your build configuration
#end if;
In this case you can use gnatmake or gprbuild, so you don't have to run gnatprep by hand.
That's very useful, for example, when you need to compile the same code for several different OSes, even using different cross-compilers.
Some old Ada 83-era compilers had a package called a.app that used a #-prefixed subset of Ada (interpreted at build time) as a preprocessing language for generating Ada (which was then translated to machine code at compile time). Rational's Verdix Ada Development System (VADS) appears to be the progenitor of a.app among several Ada compilers. Sun Microsystems, for example, derived its Ada SPARCompiler from VADS and thus also had a.app. This is not unlike the use of PL/I as the preprocessor of PL/I, which IBM did.
Chapter 2 of this document describes what a.app looks like: http://dlc.sun.com/pdf/802-3641/802-3641.pdf
No, it does not.
If you really want one, there are ways to get one (use C's, use a stand-alone preprocessor, etc.). However, I'd argue against it. It was a purposeful design decision not to have one. The whole idea of a preprocessor is very un-Ada.
Most of what C's preprocessor is used for can be accomplished in Ada in other more reliable ways. The only major exception is in making minor changes to a source file for cross-platform support. Given how much this gets abused in a typical cross-platform C program, I'm still happy there's no support for it in Ada. Very few C/C++ developers can control themselves enough to keep the changes "minor". The result may work, but is often nearly impossible for a human to read.
The typical Ada way to accomplish this would be to put the different code in different files and use your build system to somehow choose between them at compile time. Make is plenty powerful enough to help you do this.