If you want to link a static library into a shared library or executable while keeping all the symbols visible (e.g. so you can dlopen it later to find them), a non-portable way to do this on Linux/BSD is to use the flag -Wl,--whole-archive. On macOS, the equivalent flag is -Wl,-force_load,<library>; on Windows it's apparently /WHOLEARCHIVE.
Is there a portable way to do this in CMake?
I know I can add linker flags with target_link_libraries. I can detect the OS. However, since the macOS version of this includes the library name in the same string as the flag (no spaces), I think this messes with CMake's usual handling of link targets and so on. The more compatible I try to make this, the more I have to bend over backwards to make it happen.
And this is without even getting into more unusual compilers like Intel, PGI, Cray, IBM, etc. Those may not be compilers most people commonly encounter, but in some domains they're basically unavoidable.
Are there any better options?
flink.cmake will help you.
target_force_link_libraries(<target>
<PRIVATE|PUBLIC|INTERFACE> <item>...
[<PRIVATE|PUBLIC|INTERFACE> <item>...]...
)
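If you would rather not pull in an extra module, a rough, untested sketch of the per-platform handling the question describes could look like this ("myexe" and "mylib" are placeholder names, target_link_options needs CMake 3.13+, and compilers such as Intel, PGI, Cray or IBM are not covered; if you can require a recent CMake, there is also a $<LINK_LIBRARY:WHOLE_ARCHIVE,...> generator expression intended for exactly this):
# Link the static library "mylib" into "myexe" while keeping all of its symbols.
if(APPLE)
  target_link_libraries(myexe PRIVATE mylib)
  target_link_options(myexe PRIVATE "-Wl,-force_load,$<TARGET_FILE:mylib>")
elseif(MSVC)
  target_link_libraries(myexe PRIVATE mylib)
  target_link_options(myexe PRIVATE "/WHOLEARCHIVE:$<TARGET_FILE:mylib>")
else()
  # GNU ld / lld style: wrap the library between the two flags
  target_link_libraries(myexe PRIVATE -Wl,--whole-archive mylib -Wl,--no-whole-archive)
endif()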
It seems like CMake is fairly entrenched in its view that there should be one, and only one, CMAKE_CXX_COMPILER for all C++ source files. I can't find a way to override this on a per-target basis. This makes a mix of host-and-cross compiling in a single CMakeLists.txt very difficult with the built-in CMake facilities.
So, my question is: what's the best way to use multiple compilers for the same language (i.e. C++)?
It's impossible to do this with CMake.
CMake only keeps one set of compiler properties, which is shared by all targets in a CMakeLists.txt file. If you want to use two compilers, you need to run CMake twice. This is even true for, e.g., building 32-bit and 64-bit binaries with the same compiler toolchain.
The quick-and-dirty way around this is to use custom commands. But then you end up with what are basically glorified shell scripts, which is probably not what you want.
The clean solution is: Don't put them in the same CMakeLists.txt! You can't link between different architectures anyway, so there is no need for them to be in the same file. You may reduce redundancies by refactoring common parts of the CMake scripts into separate files and include() them.
The main disadvantage here is that you lose the ability to build with a single command, but you can solve that by writing a wrapper in your favorite scripting language that takes care of calling the different CMake-makefiles.
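Such a wrapper can even be a CMake script itself; here is a rough sketch (directory names are placeholders, and it assumes CMake 3.13+ for the -S/-B options), run as cmake -P build_all.cmake:
# build_all.cmake -- configure and build each sub-tree with its own settings
foreach(dir host cross)
  execute_process(
    COMMAND ${CMAKE_COMMAND} -S ${CMAKE_CURRENT_LIST_DIR}/${dir}
                             -B ${CMAKE_CURRENT_LIST_DIR}/build-${dir}
    RESULT_VARIABLE result)
  if(NOT result EQUAL 0)
    message(FATAL_ERROR "configure failed for ${dir}")
  endif()
  execute_process(
    COMMAND ${CMAKE_COMMAND} --build ${CMAKE_CURRENT_LIST_DIR}/build-${dir})
endforeach()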
You might want to look at ExternalProject:
http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html
It's not impossible, despite what the top answer suggests. I have the same problem as the OP: some sources that are cross-compiled for a Raspberry Pi Pico, and some unit tests that I run on my host system.
To make this work, I'm using the very shameful "set" to override the compiler in the CMakeLists.txt for my test folder. Works great.
if(DEFINED ENV{HOST_CXX_COMPILER})
set(CMAKE_CXX_COMPILER $ENV{HOST_CXX_COMPILER})
else()
set(CMAKE_CXX_COMPILER "g++")
endif()
set(CMAKE_CXX_FLAGS "")
The CMake devs/community seem very much against using set to change the compiler, for some reason. They assume you need to use one compiler for the entire project, which is an incorrect assumption for embedded systems projects.
My solution above works, and I think it fits the philosophy: users can still change their chosen compiler via an environment variable, and if it's not set I assume g++. set only changes variables for the current scope, so this doesn't affect the rest of the project.
To extend Bill Hoffman's answer:
Build your project as a superbuild, using some kind of template like the one here: https://github.com/Sarcasm/cmake-superbuild
This will configure both the dependencies and your project as ExternalProjects (standalone CMake configure/build/install environments).
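For the host-plus-cross situation in this question, a bare-bones superbuild could look something like the following sketch (directory names, the toolchain file and the host compiler are all placeholders):
include(ExternalProject)

# Cross-compiled part, driven by its own toolchain file
ExternalProject_Add(firmware
  SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/firmware
  CMAKE_ARGS      -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/cmake/cross-toolchain.cmake
  INSTALL_COMMAND "")

# Host-side unit tests, built with the host compiler
ExternalProject_Add(host_tests
  SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/tests
  CMAKE_ARGS      -DCMAKE_CXX_COMPILER=g++
  INSTALL_COMMAND "")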
I am working on a free-software project which involves high performance computing and the MPI library.
In my code, I need to know the size of the MPI_Offset type, which is defined in mpi.h.
Normally such a project would be built using autotools and this problem would be easily solved. But for my sins, I am working with a CMake build and I can't find any way to perform this simple task. There must be a way to do it - it is commonly done in autotools projects, so I assume it is also possible in CMake.
When I use:
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
It fails, because mpi.h is not included in the generated C code.
Is there a way to tell check_type_size() to include mpi.h?
This is done via CMAKE_EXTRA_INCLUDE_FILES:
include(CheckTypeSize)
find_package(MPI)
include_directories(SYSTEM ${MPI_INCLUDE_PATH})
set(CMAKE_EXTRA_INCLUDE_FILES "mpi.h")
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
set(CMAKE_EXTRA_INCLUDE_FILES)
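Two follow-ups that may save some head-scratching (hedged, since details vary between CMake versions): the Check* modules compile their test program with try_compile(), which honors CMAKE_REQUIRED_INCLUDES rather than include_directories(), and check_type_size() reports its result in both SIZEOF_MPI_OFFSET and HAVE_SIZEOF_MPI_OFFSET. An extended sketch of the same snippet:
set(CMAKE_REQUIRED_INCLUDES ${MPI_INCLUDE_PATH})   # seen by the Check* test program
set(CMAKE_EXTRA_INCLUDE_FILES "mpi.h")
check_type_size("MPI_Offset" SIZEOF_MPI_OFFSET)
set(CMAKE_EXTRA_INCLUDE_FILES)
set(CMAKE_REQUIRED_INCLUDES)

if(HAVE_SIZEOF_MPI_OFFSET)
  add_definitions(-DSIZEOF_MPI_OFFSET=${SIZEOF_MPI_OFFSET})
endif()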
It may be more common to write platform checks with autotools, so here is some more information on how to write platform checks with CMake.
On a personal note, while CMake is certainly not the most pleasant exercise, for me autotools is reserved for the capital sins. It is really hard for me to defend CMake, but in this instance it is even documented. Naturally, setting a separate "variable" that you then have to reset afterwards, instead of just passing a parameter, is clearly in keeping with the surprising "design principles" of CMake.
I have to build a slightly patched version of GCC for a project and then compile the rest of the project with it. I am wondering what the best way to do this is. I am currently using ExternalProject_Add to build the compiler as a dependency of the other binaries, but I don't know how to change the compiler for part of the project.
Your best bet is probably to structure things as a superbuild. A top level project would have two subprojects built using ExternalProject_Add. The compiler would be the first subproject and the second would be your actual project which could then make use of the compiler by making your real subproject depend on the compiler subproject.
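A rough sketch of that superbuild (all names and paths are placeholders, and the real GCC configure/build steps are considerably more involved than shown here):
include(ExternalProject)

# First subproject: the patched compiler
ExternalProject_Add(patched_gcc
  SOURCE_DIR        ${CMAKE_CURRENT_SOURCE_DIR}/gcc
  CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR>
  BUILD_COMMAND     make
  INSTALL_COMMAND   make install)
ExternalProject_Get_Property(patched_gcc INSTALL_DIR)

# Second subproject: the actual project, built with the compiler from above
ExternalProject_Add(my_project
  SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/src
  CMAKE_ARGS -DCMAKE_C_COMPILER=${INSTALL_DIR}/bin/gcc
             -DCMAKE_CXX_COMPILER=${INSTALL_DIR}/bin/g++
  DEPENDS    patched_gcc)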
A second alternative is discussed here, where your actual project remains the top-level project but the compiler is built as a sub-build invoked via execute_process(). I've used this approach in real-world situations and while it does work, I'd personally still go with the superbuild approach if I had the choice, since it's a bit cleaner and perhaps better understood by other developers.
Lastly, consider whether something like hunter might be able to take care of building your compiler for you. Depending on what/how you need to patch GCC, this may or may not be the most attractive approach.
I would use two CMake projects: one building your compiler, the other building your actual project with the compiler you just built. The compiler is such an essential part of a CMake configuration that changing it mid-project is asking for trouble.
If you prefer having everything in one project, you can drive the build of your actual project in a sub-folder via add_custom_command. As a user I would find this more surprising, but it could lead to better integration.
I have created a static library containing all my generic classes. Some of these classes use frameworks.
Now I have two projects, one that uses some classes that use frameworks, and one that doesn't use any of the classes that use frameworks.
Because static libraries don't support including frameworks (if I am correct), I have to include the frameworks in the project that uses them. But when I compile the project that doesn't use any of the framework classes, the build breaks because it still requires the frameworks. Now, I know it pulls in all the (unused) classes from the library because I use the linker flag '-ObjC' to prevent 'unrecognized selector' errors.
Does anyone know how to compile only the required source files per project? And prevent from all frameworks having to be included in all projects that use my static library?
First of all, you are right in that a static library cannot include any frameworks or other static libraries; it is just the collection of all the object files (*.o) that make up that specific static library.
Does anyone know how to compile only the required source files per project?
The linker will by default only pull in object files from the static library that contain symbols referenced by the application. So, if you have two files a.m and b.m in your static library and you only use symbols from a.m in your main program, then b.o (the object file generated from b.m) will not appear in your final executable. As a sub-case, if b.m uses a function/class c which is only declared (not implemented), you will not get any linker errors. As soon as you reference some symbol from b.m in your program, b.o will also be linked and you will get linker errors due to the missing implementation of c.
If you want this kind of selection to happen at symbol rather than object-level granularity, enable dead code stripping in Xcode. This corresponds to the gcc option -Wl,-dead_strip (i.e., the linker option -dead_strip in the Build Settings info pane for your project), and gives a further size reduction.
In your case, though, as you correctly say, it is the use of the "-ObjC" linker flag that defeats this mechanism. So this actually depends on you: if you remove the -ObjC flag, you get the behavior you want for free, while losing the stricter check on selectors.
And prevent from all frameworks having to be included in all projects that use my static library?
Xcode/GCC support a linking option called "weak linking", which allows a framework or static library to be loaded lazily, i.e., only when one of its symbols is actually used.
"weak linking" can be enabled either through a linker flag (see Apple doc above), or through Xcode UI (Target -> Info -> General -> Linked Libraries).
Anyhow, the framework or library must be available in all cases at compile/link time: the "weak" option only affects the moment when the framework is first loaded at runtime. Thus, I don't think this is useful for you, since you would need anyway to include the framework in all of your projects, which is what you do not want.
As a side note, weak linking is an option that mostly makes sense when using features only available in a newer SDK version (say, 4.3.2) while also supporting deployment on older SDK versions (say, 3.1.3). In that case, you rely on the fact that the newer frameworks will actually be present on the newer devices, and you conditionally compile in the features that require them, so that older devices never require them (and thus never attempt to load the newer version of the framework and crash).
To make things worse, GCC does not support the "auto-linking" feature of Microsoft compilers, which lets you specify which library to link by means of a #pragma comment in your source file. That could have offered a workaround, but it is not there.
So, I am really sorry to have to say that you should use a different approach that could equally satisfy your needs:
remove the -ObjC flag;
split your static library into two or more parts according to their dependencies on external frameworks;
resort to including the source files directly.
About the second part of your question, you can mark a linked framework as Optional:
About the first part, it is not clear to me what you intend to do:
A library being declared in a project
A project declaring which files are compiled (via Target > Build phases > Compile sources)
Unless you set up complex build rules to include files or not (which, if I remember well, can be done using .xcconfig files), I don't see any other solution than splitting your library, which I would recommend for its ease. You could even use several targets in the same project... You could also just use preprocessor macros (#ifdef...), but that depends on what you want to do.
It sounds like you have library bloat. To keep things small I think you need to refactor your library into separate libraries with minimal dependencies. You could try turning on "Dead Code Stripping" in the "Linker Flags" section of the build target info (Xcode 3.x) to see if that does what you want (doesn't require frameworks used by classes that are dead-stripped.)
When you link against a framework on iOS I don't think that really adds any bloat since the framework is on the device and not in your application. But your library is still a bit bloated by having entire classes that never get used but are not stripped out of the library.
A static library is built before your app is compiled, and then the whole thing is linked into your app. There's no way to include some parts of the library but not others -- you get the whole enchilada.
Since you have the source code for the library, why not just add the code directly to each application? That way you can control exactly what goes into each app. You can still keep your generic classes together in the same location, and use the same code in both apps, but you avoid the hassle of using a library.
To support multiple platforms in C/C++, one would use the preprocessor to enable conditional compilation, e.g.:
#ifdef _WIN32
#include <windows.h>
#endif
How can you do this in Ada? Does Ada have a preprocessor?
The answer to your question is no, Ada does not have a preprocessor built into the language. That means each compiler may or may not have one, and there is no "uniform" syntax for preprocessing and things like conditional compilation. This was intentional: it's considered "harmful" to the Ada ethos.
There are almost always ways around the lack of a preprocessor, but the solutions can often be a little cumbersome. For example, you can declare the platform-specific subprograms as 'separate' and then use your build tools to compile the correct one (either a project system, using pragma body replacement, or a very simple directory scheme: put all the Windows files in /windows/ and all the Linux files in /linux/, and include the appropriate directory for the platform); a sketch of the directory scheme follows at the end of this answer.
All that being said, GNAT realized that sometimes you do need a preprocessor and created gnatprep. It should work regardless of the compiler (but you will need to insert it into your build process). Similarly, for simple things (like conditional compilation) you can probably just use the C preprocessor, or even roll your own very simple one.
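A minimal sketch of the directory-per-platform scheme mentioned above (package, file and directory names are made up): one shared spec, one body per platform, and the build pulls in only the directory matching the target.
-- platform.ads : shared spec, compiled on every platform
package Platform is
   procedure Initialize;
end Platform;

-- linux/platform.adb : one body per platform; the build adds only the
-- directory matching the target (e.g. linux/ or windows/) to its source path
package body Platform is
   procedure Initialize is
   begin
      null;  -- Linux-specific setup goes here
   end Initialize;
end Platform;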
AdaCore provides the gnatprep preprocessor, which is specialized for Ada. They state that gnatprep "does not depend on any special GNAT features", so it sounds as though it should work with non-GNAT Ada compilers. Their User Guide also provides some conditional compilation advice.
I have been on a project where m4 was used as well, with the Ada spec and body files suffixed as ".m4s" and ".m4b", respectively.
My preference is really to avoid preprocessing altogether, and just use specialized bodies, setting up CM and the build process to manage them.
No, but the C preprocessor (cpp) or m4 can be called on any file, either on the command line or from a build tool like make or ant. I suggest calling your .ada files something else. I have done this for some time with Java files: I call the file .m4 and use a make rule to create the .java, then build it in the normal way.
I hope that helps.
Yes, it does.
If you are using the GNAT compiler, you can use gnatprep to do the preprocessing, or, if you use GNAT Programming Studio, you can configure your project file to define conditional compilation switches like
#if SOMESWITCH then
-- Your code here is compiled only if the switch SOMESWITCH is active in your build configuration
#end if;
In this case you can use gnatmake or gprbuild so you don't have to run gnatprep by hand.
That's very useful, for example, when you need to compile the same code for several different OS's using even different cross-compilers.
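To make that concrete, here is a small made-up example of a gnatprep input file; the exact invocation switches are worth double-checking against the GNAT User's Guide.
-- paths.ads.in : gnatprep input. Running something like
--   gnatprep -DWINDOWS=True paths.ads.in paths.ads
-- produces the Windows variant; -DWINDOWS=False produces the POSIX one.
package Paths is
#if WINDOWS then
   Separator : constant Character := '\';
#else
   Separator : constant Character := '/';
#end if;
end Paths;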
Some old Ada-83-era compilers had a package called a.app that used a #-prefixed subset of Ada (interpreted at build time) as a preprocessing language for generating Ada (to be then translated to machine code at compile time). Rational's Verdix Ada Development System (VADS) appears to be the progenitor of a.app among several Ada compilers. Sun Microsystems, for example, derived its Ada SPARCompiler from VADS and thus also had a.app. This is not unlike the use of PL/I as the preprocessor of PL/I, which IBM did.
Chapter 2 is some documentation of what a.app looks like: http://dlc.sun.com/pdf/802-3641/802-3641.pdf
No, it does not.
If you really want one, there are ways to get one (Use C's, use a stand-alone one, etc.) However I'd argue against it. It was a purposeful design decision to not have one. The whole idea of a preprocessor is very un-Ada.
Most of what C's preprocessor is used for can be accomplished in Ada in other more reliable ways. The only major exception is in making minor changes to a source file for cross-platform support. Given how much this gets abused in a typical cross-platform C program, I'm still happy there's no support for it in Ada. Very few C/C++ developers can control themselves enough to keep the changes "minor". The result may work, but is often nearly impossible for a human to read.
The typical Ada way to accomplish this would be to put the different code in different files and use your build system to somehow choose between them at compile time. Make is plenty powerful enough to help you do this.