Is there a way in CMake to tell Visual Studio to compile, in parallel, runtime dependencies that are not direct compile-time dependencies?

I want to set up a Visual Studio solution configuration through CMake where applications A and B depend on shared libraries C, D, and E, which are loaded at runtime.
These must all be compiled before either A or B can run, but none of them are compile-time dependent on each other in any way. They may share common dependencies further down (like a shared interface that comes from yet another library "Z"), but this should not have an impact on what I am describing.
All of A through E should be allowed to compile in parallel for the best build time. The libraries' only requirement is that they have been compiled and are up to date with code changes by the time A or B is built and launched for debugging.
Is this type of relationship possible through CMake?
I've tried add_dependencies(), but this seems to stop A or B from compiling until all of C, D, and E are finished, which wastes time.
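For reference, here's a minimal sketch of the kind of setup I mean (target and file names are illustrative):
add_executable(A a_main.cpp)
add_executable(B b_main.cpp)
add_library(C SHARED c.cpp)
add_library(D SHARED d.cpp)
add_library(E SHARED e.cpp)
# A and B only load C, D, and E at runtime; there is no
# target_link_libraries() relationship between them.
add_dependencies(A C D E)  # this is the call that serializes the build
add_dependencies(B C D E)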

CMake precompiled headers issue with mixed C/C++ project

Environment
cmake version 3.21.1, running on macOS 10.15.7; clang version string: Apple clang version 12.0.0 (clang-1200.0.32.29).
Introduction
I have a project for a library written in C; however, its unit tests are written in C++ using Google Test. The library implements different algorithms, with one target per algorithm. For each target, say A, there is a corresponding A_tests target. Say there are 5 targets, A through E.
Due to ever-increasing build times, I'm trying to add Google Test's "gtest/gtest.h" header as a precompiled header, evidently only for C++. To avoid repeatedly recompiling the same header, I added the following entry to one of my targets, say A_tests:
target_precompile_headers(A_tests PRIVATE [["gtest/gtest.h"]])
Note that A_tests is composed entirely of C++ files.
For each of the other targets (X = B, C, D, E), I added the following:
target_precompile_headers(X_tests REUSE_FROM A_tests)
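Condensed, the relevant part of my CMakeLists looks roughly like this (the if(TARGET ...) guard reflects that B through E can be disabled through options, as noted below):
foreach(X B C D E)
  if(TARGET ${X}_tests)
    target_precompile_headers(${X}_tests REUSE_FROM A_tests)
  endif()
endforeach()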
The issue
Now this works fine for, say, X = B and C, which are also pure C++ targets. However, D_tests has a C file in it in addition to the various C++ files. When configuring the project with CMake, I get the following error:
CMake Error in CMakeLists.txt:
Unable to resolve full path of PCH-header
'/Users/.../my-lib/build/CMakeFiles/A_tests.dir/cmake_pch.h'
assigned to target D_tests, although its path is supposed to be
known!
Indeed, at my-lib/build/CMakeFiles/A_tests.dir, there is a cmake_pch.hxx file but not a cmake_pch.h file.
Root cause
Eventually, after an investigation that involved running CMake under a debugger, I found out it had to do with the presence of a C file in D_tests, combined with the lack of C files in A_tests. (Note: the PCH must be compiled inside A_tests, since A is the only mandatory target in the library -- B through E may all be disabled through CMake options.)
Attempts to fix
My first attempt was to add a dummy C file to A_tests to ensure that a C PCH is created as well. This does make the error go away, but here is the content of the resulting cmake_pch.h (note this is the C version of the file, as opposed to the separate C++ version, cmake_pch.hxx):
/* generated by CMake */
#pragma clang system_header
#include "gtest/gtest.h"
I can't imagine anything good coming from force-including a C++ header into C files (even if that's not an outright error, it will at the very least slow down compilation by including the PCH in files where it makes no sense to do so).
After some more experimentation, I got an acceptable result by changing the target_precompile_headers() entry in A_tests to the following, using generator expressions:
target_precompile_headers(A_tests PRIVATE
  "$<$<COMPILE_LANGUAGE:CXX>:\"gtest/gtest.h\">"
  "$<$<COMPILE_LANGUAGE:C>:<stddef.h$<ANGLE-R>>")
In principle this solution is acceptable -- having a C PCH with stddef.h is not really a problem, since it's a small and harmless header, and moreover there are very few C files in the X_tests targets, and C compilation is blazingly fast anyway.
However, I'm still bothered by the fact that I must add some C header to prevent an error. I even tried changing the relevant part of the statement above to "$<$<COMPILE_LANGUAGE:C>:>", but then I get a different error: target_precompile_headers called with invalid arguments.
The question
Can I modify my script to communicate to CMake that, for target D_tests, only a C++ PCH should be used, even though there are C files in that target?
Failing that, is it possible to create an empty C PCH, say by a suitable modification of the generator expression above?

How to reuse Fortran modules without copying source or creating libraries

I'm having trouble understanding if/how to share code among several Fortran projects without building libraries or duplicating source code.
I am using Eclipse/Photran with the Intel compiler (ifort) on a linux system, but I believe I'm having a bigger conceptual problem with modules than with the specific tools.
Here's a simple example: In ~/workspace/cow I have a source directory (src) containing cow.f90 (the PROGRAM) and two modules, m_graze and m_moo, in m_graze.f90 and m_moo.f90, respectively. This project builds and links properly to create the executable 'cow'. The executable and modules (m_graze.mod and m_moo.mod) are stored in ~/workspace/cow/Debug, and object files are stored under ~/workspace/cow/Debug/src.
Later, I create ~/workspace/sheep with src/sheep.f90 as the program and src/m_baa.f90 as the module m_baa. I want to 'use m_graze, only: ruminate' in sheep.f90 to get access to the ruminate() subroutine. I could just copy m_graze.f90, but that could lead to the code getting out of sync, and it doesn't take into account any dependencies m_graze might have. For these reasons, I'd rather leave m_graze in the cow project and compile and link sheep.f90 against it.
If I try to compile the sheep project, I'll get an error like:
error #7002: Error in opening the compiled module file. Check INCLUDE paths. [M_GRAZE]
Under Properties:Project References for sheep, I can select the cow project. Under Properties:Fortran Build:Settings:Intel Compiler:Preprocessor I can add ~/workspace/cow/Debug (location of the module files) to the list of include directories so the compiler now finds the cow modules and compiles sheep.f90. However the linker dies with something like:
Building target: sheep
Invoking: Intel(R) Fortran Linker
ifort -L/home/me/workspace/cow/Debug -o "sheep" ./src/sheep.o
./src/sheep.o: In function `sheep':
/home/me/workspace/sheep/src/sheep.f90:11: undefined reference to `m_graze_mp_ruminate_'
This would normally be solved by adding libraries and library paths to the linker settings except there are no appropriate libraries to link to (this is Fortran, not C.)
The cow project was perfectly capable of compiling and linking together cow.f90, m_graze.f90 and m_moo.f90 into an executable. Yet while the sheep project can compile sheep.f90 and m_baa.f90 and can find the module m_graze.mod, it can't seem to find the symbols for m_graze even though all the requisite information is present on the system for it to do so.
It would seem to be an easy matter of configuration to get the linker portion of ifort to find the missing pieces and put them together but I have no idea what magic words need to be entered where in the Photran UI to make this happen.
I confess an utter lack of interest and competence in C and the C build process and I'd rather avoid the diversion of creating libraries (.a or .so) unless that's the only way to make this work.
Ultimately, I'm looking for a pure Fortran solution to this problem so I can keep a single copy of the source code and don't have to manually maintain a pile of custom Makefiles.
So can this be done?
Apologies if this has already been documented somewhere; Google is only showing me simple build examples, how to create modules, and how to link with existing libraries. There don't seem to be (m)any examples of code reuse with modules that don't involve duplicating source code.
Edit
As respondents have pointed out, the .mod files are necessary but not sufficient; either object code (in the form of m_graze.o) or static or shared libraries must be specified during the linking phase. The .mod files describe the interface to the object code/library but both are necessary to build the final executable.
For an oversimplified toy problem such as this, that's sufficient to answer the question as posed.
In a larger project with more complex dependencies (in my case, 80+ KLOC of F90 linking to the MKL version of LAPACK95), the IDE or toolchain may lack sufficient automatic or user-interface facilities to make sharing a single canonical set of source files a viable strategy. The choice seems to be between risking duplicate source files getting out of sync, giving up many of the benefits of an IDE (i.e. avoiding manual creation of make/CMake/SCons files), or, in all likelihood, both. While a revision control system and good code organization can help, it's clear that sharing a single canonical set of source files among projects is far from easy given the current state of Eclipse.
Some background, which I suspect you already know: typically (and this includes ifort), compiling the source code for a Fortran module results in two outputs -- a "mod" file that contains a description of the Fortran entities the module defines (which the compiler needs to find whenever it sees a USE statement for the module), and object code for the linker that implements the procedures, variable storage, etc., that the module defines.
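For example, compiling the module on its own with ifort produces both outputs:
ifort -c m_graze.f90   # creates m_graze.mod (for USE) and m_graze.o (for the linker)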
Your first error (the one you solved) is because the compiler couldn't find the mod file.
The second error is because the linker hasn't been told about the object code that implements the stuff that was in the source file with the module. I'm not an Eclipse user by any means, but a brute force way of specifying that is just to add the object file (xxxxx/Debug/m_graze.o) as an additional linker option (Fortran Build > Settings, under Intel Fortran Linker > Command Line). (Other tool chains have explicit "additional object file" properties for their link stage - there may well be a better way of doing this for the Intel chain.)
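With the object file added, the generated link line would look something like this (exact paths taken from the question; precise form assumed):
ifort -L/home/me/workspace/cow/Debug -o "sheep" ./src/sheep.o /home/me/workspace/cow/Debug/src/m_graze.o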
For more involved examples you would typically create a library out of the shared code. That's not really C specific; the only Fortran aspect is that the library's archive of object code needs to be provided alongside the mod files that the Fortran compiler generates.
Yes, the object code must be provided. E.g., when you install libnetcdf-dev in Debian (apt-get install libnetcdf-dev), a /usr/include/netcdf.mod file is included.
You can now use all netcdf routines in your Fortran code. E.g.,
program main
use netcdf
...
end
but you'll have to link to the netcdf shared (or static) library, i.e.,
gfortran -I/usr/include/ main.f90 -lnetcdff
However, as user MSB mentioned, the mod file can only be used by the gfortran that comes with the distribution (apt-get install gfortran). If you want to use any other compiler (even a different version that you may have installed yourself), then you'll have to build netcdf yourself using that particular compiler.
So creating a library is not a bad solution.

Jenkins and MSBuild and Copy Artifacts Plugin and proper usage for multiple projects

My problem boils down to this base case: our solution has two projects, A and B, which project C then includes into its build process.
When someone pushes to project A or B, Jenkins builds the project using MSBuild, archives the artifact, and then kicks off a build of C.
When C begins, it has four "Copy Artifacts" tasks that need to run: first, it copies the artifacts from A into .\A\obj\Release\, then it copies the same artifacts into .\A\bin\Release. Then it repeats for project B. Then it builds itself.
That's right: for each project C relies on, Copy Artifacts has to be run twice, or else MSBuild detects that something is out of date and the whole thing is built from scratch.
Is there an easier way to do this? Can I pass a parameter to MSBuild (or configure the .csproj) that says "only build this project, assume the other project binaries are up-to-date, regardless of the timestamp"? Is there a better plugin that will take care of this for me?
This is really annoying (and confusing) in our real-world case where we've got almost 20 different projects, with layered dependencies between them.
You can pass BuildProjectReferences=false into MSBuild to tell it to skip auto-building references and just build the project indicated.
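For example (project file name assumed):
msbuild C.csproj /p:BuildProjectReferences=false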

include_libraries for a single target only

In a project with many targets, I wish to add include directories for a certain target only. I don't want to slow down compilation by adding many include directories to all projects, and I do want the build to fail at compile time (not only at link time) if I did not specify a required library as a dependency of the executable.
Is there any way to do that in CMake? Something like target_link_libraries, but only for include directories?
First of all, I would not worry about a potential increase in compilation time from adding many include directories. Of course, you should test whether it really is an issue.
You may try to specify the COMPILE_FLAGS property directly on the source files, but this is likely not cross-platform and needs to be done on each source file.
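For instance, something along these lines (file name and path hypothetical; the -I flag assumes a GCC/Clang-style compiler):
set_source_files_properties(special.c PROPERTIES
  COMPILE_FLAGS "-I${CMAKE_SOURCE_DIR}/special/include")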
Alternatively, consider splitting up your project in subdirectories and write a separate CMakeLists.txt for each subdir. In that case, the include_directories() call is limited to the scope of the current project (and its subprojects) and you would have more fine-grained control over each project.
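A minimal sketch of that layout (directory and target names hypothetical):
# top-level CMakeLists.txt
add_subdirectory(tool_with_extra_includes)

# tool_with_extra_includes/CMakeLists.txt
include_directories(${CMAKE_SOURCE_DIR}/extra/include)  # visible only in this directory and below
add_executable(tool main.c)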
There may be an issue with requiring failure at compile time: e.g. with static libraries A, depending on B, depending on C: when someone links an exe/dll against A, the libs B and C are needed, but this is not necessarily detectable at compile time... and it is difficult to solve generically with CMake.

DLL and LIB files - what and why?

I know very little about DLLs and LIBs other than that they contain vital code required for a program to run properly - libraries. But why do compilers generate them at all? Wouldn't it be easier to just include all the code in a single executable? And what's the difference between DLLs and LIBs?
There are static libraries (LIB) and dynamic libraries (DLL) - but note that .LIB files can be either static libraries (containing object files) or import libraries (containing symbols to allow the linker to link to a DLL).
Libraries are used because you may have code that you want to use in many programs. For example if you write a function that counts the number of characters in a string, that function will be useful in lots of programs. Once you get that function working correctly you don't want to have to recompile the code every time you use it, so you put the executable code for that function in a library, and the linker can extract and insert the compiled code into your program. Static libraries are sometimes called 'archives' for this reason.
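As an illustration (MSVC toolchain; file names hypothetical), building and consuming a static library from the command line might look like:
cl /c strcount.c                    # compile to strcount.obj, no linking
lib /OUT:strcount.lib strcount.obj  # archive the object file into a static library
cl program.c strcount.lib           # the linker extracts the needed code into program.exe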
Dynamic libraries take this one step further. It seems wasteful to have multiple copies of the library functions taking up space in each of the programs. Why can't they all share one copy of the function? This is what dynamic libraries are for. Rather than building the library code into your program when it is compiled, it can be run by mapping it into your program as it is loaded into memory. Multiple programs running at the same time that use the same functions can all share one copy, saving memory. In fact, you can load dynamic libraries only as needed, depending on the path through your code. No point in having the printer routines taking up memory if you aren't doing any printing. On the other hand, this means you have to have a copy of the dynamic library installed on every machine your program runs on. This creates its own set of problems.
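For example, loading a DLL only when it is needed looks roughly like this on Windows (DLL and function names hypothetical):
#include <windows.h>

typedef int (*print_fn)(const char *);

int main(void)
{
    /* Load the printing code only when it is actually needed. */
    HMODULE lib = LoadLibraryA("printer.dll");
    if (lib == NULL)
        return 1;

    /* Look up the function's address inside the loaded DLL. */
    print_fn do_print = (print_fn)GetProcAddress(lib, "do_print");
    if (do_print != NULL)
        do_print("hello");

    FreeLibrary(lib);  /* release this program's use of the shared copy */
    return 0;
}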
As an example, almost every program written in 'C' will need functions from a library called the 'C runtime library', though few programs will need all of its functions. The C runtime comes in both static and dynamic versions, so you can determine which version your program uses depending on your particular needs.
Another aspect is security (obfuscation). Once a piece of code is extracted from the main application and put in a separate dynamic-link library, it is easier to attack and analyse (reverse-engineer) the code, since it has been isolated. When the same piece of code is kept in a LIB library, it is part of the compiled (linked) target application, and it is thus harder to isolate (differentiate) that piece of code from the rest of the target binaries.
One important reason for creating a DLL/LIB rather than just compiling the code into an executable is reuse and relocation. The average Java or .NET application (for example) will most likely use several 3rd party (or framework) libraries. It is much easier and faster to just compile against a pre-built library, rather than having to compile all of the 3rd party code into your application. Compiling your code into libraries also encourages good design practices, e.g. designing your classes to be used in different types of applications.
A DLL is a library of functions that are shared among other executable programs. Just look in your windows/system32 directory and you will find dozens of them. When you build a DLL, the build normally also creates a lib file so that the application *.exe program can resolve symbols that are declared in the DLL.
A .lib is a library of functions that are statically linked to a program -- they are NOT shared by other programs. Each program that links with a *.lib file has all the code in that file. If you have two programs A.exe and B.exe that link with C.lib, then A and B will each contain the code in C.lib.
How you create DLLs and libs depends on the compiler you use. Each compiler does it differently.
One other difference lies in performance.
Because the DLL is loaded at runtime by the .exe(s), the .exe(s) and the DLL work with a shared-memory concept, and hence performance is lower relative to static linking.
On the other hand, a .lib is code that is linked statically at compile time into every process that requests it. Hence each .exe has its own copy of the code in memory, which increases the performance of the process.