Meson equivalent of automake's CONFIG_STATUS_DEPENDENCIES?

I have a project whose build options are complicated enough that I have to run several external scripts during the configuration process. If these scripts, or the files that they read, are changed, then configuration needs to be re-run.
Currently the project uses Autotools, and I can express this requirement using the CONFIG_STATUS_DEPENDENCIES variable. I'm experimenting with porting the build process to Meson and I can't find an equivalent. Is there currently an equivalent, or do I need to file a feature request?
For concreteness, a snippet of the meson.build in progress:
pymod = import('python')
python = pymod.find_installation('python3')
svf_script = files('scripts/compute-symver-floor')
svf = run_command(python, svf_script, files('lib'),
                  host_machine.system())
if svf.returncode() == 0
  svf_results = svf.stdout().split('\n')
  SYMVER_FLOOR = svf_results[0].strip()
  SYMVER_FILE = svf_results[2].strip()
else
  error(svf.stderr())
endif
# next line is a fake API expressing the thing I can't figure out how to do
meson.rerun_configuration_if_files_change(svf_script, SYMVER_FILE)

This is what custom_target() is for.
Minimal example
svf_script = files('svf_script.sh')
svf_depends = files('config_data_1', 'config_data_2') # files that svf_script.sh reads
svf = custom_target('svf_config',
  command: svf_script,
  depend_files: svf_depends,
  build_by_default: true,
  output: 'fake')
This creates a custom target named svf_config. When out of date, it runs the svf_script command. It depends on the files in the svf_depends files object, as well as all the files listed in the command keyword argument (i.e. the script itself).
You can also specify other targets as dependencies using the depends keyword argument.
output is set to 'fake' to stop meson from complaining about a missing output keyword argument. Make sure that there is a file of the same name in the corresponding build directory to stop the target from always being considered out-of-date. Alternatively, if your configure script(s) generate output files, you could list them in this array.
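For instance, if your script can be taught to write its results to a file, that file becomes the real output; a minimal sketch, assuming (hypothetically) the script accepts the output path as its first argument and writes symver-floor.dat:
svf = custom_target('svf_config',
  command: [svf_script, '@OUTPUT@'],
  depend_files: svf_depends,
  build_by_default: true,
  output: 'symver-floor.dat')
Here @OUTPUT@ expands to the path of symver-floor.dat inside the build directory, so the target is re-run exactly when the script, the depend_files, or the output file change.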

Related

how to customize nix package builder script

The root problem is that nix builds libxml2-2.9.14 with autoconf instead of cmake, and a consequence of this is that the CMake configuration files are missing (details like the version number and platform-specific dependencies such as ws2_32, which my project's CMake scripts need). libxml2-2.9.14 already ships with a CMake configuration that works nicely, except that nix does not use it (I guess they have their own reasons).
Therefore I would like to reuse the libxml2-2.9.14 nix package and override the builder script with my own (which is a trivial cmake dance).
Here is my attempt:
defaultPackage = forAllSystems (system:
  let
    pkgs = nixpkgsFor.${system};
    cmakeLibxml = pkgs.libxml2.overrideAttrs (o: rec {
      PROJECT_ROOT = builtins.getEnv "PWD";
      builder = "${PROJECT_ROOT}/nix-libxml2-builder.sh";
    });
  in
Where nix-libxml2-builder.sh is my script calling cmake with all the options I need. It fails like this:
last 1 log lines:
> bash: /nix-libxml2-builder.sh: No such file or directory
For full logs, run 'nix log /nix/store/andvld0jy9zxrscxyk96psal631awp01-libxml2-2.9.14.drv'.
As you can see, the issue is that PROJECT_ROOT does not get set (it is ignored), and I do not know how else to point nix at my builder script.
What am I doing wrong?
Guessing from the use of defaultPackage in your snippet, you are using flakes. Flakes are evaluated in pure evaluation mode, which means there is no way to influence the build from outside; hence getEnv always returns an empty string (unfortunately, this is not properly documented).
There is no need to refer to the builder script via $PWD. The whole flake is copied to the nix store so you can use your files directly. For example:
builder = ./nix-libxml2-builder.sh;
That said, the build will probably still fail, because cmake will not be available in the build environment. You would have to override nativeBuildInputs attribute to add cmake there.
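Putting both fixes together, a minimal sketch (the `or [ ]` fallback is a defensive assumption; how much of the original derivation's environment your builder must replicate is left out here):
cmakeLibxml = pkgs.libxml2.overrideAttrs (o: {
  # the flake tree is copied to the store, so a path literal works
  builder = ./nix-libxml2-builder.sh;
  # make cmake available inside the build environment
  nativeBuildInputs = (o.nativeBuildInputs or [ ]) ++ [ pkgs.cmake ];
});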

In CMake how do I deal with generated source files which number and names are not known before?

Imagine a code generator which reads an input file (say, a UML class diagram) and produces an arbitrary number of source files which I want to be handled in my project. (To draw a simple picture, let's assume the code generator just produces .cpp files.)
The problem is that the number of files generated depends on the input file and thus is not known when writing the CMakeLists.txt file, or even during CMake's configure step. E.g.:
>>> code-gen uml.xml
generate class1.cpp..
generate class2.cpp..
generate class3.cpp..
What's the recommended way to handle generated files in such a case? You could use FILE(GLOB ...) to collect the file names after running code-gen the first time, but this is discouraged because CMake would not know about any files on the first run, and later it would not notice when the number of files changes.
I could think of some approaches but I don't know if CMake covers them, e.g.:
(somehow) define a dependency from an input file (uml.xml in my example) to a variable (a list of generated file names)
in case the code generator can be convinced to tell which files it generates, use the output of code-gen to create the list of file names (this would lead to similar problems, but at least I would not have to use GLOB, which might pick up old files)
just define a custom target which runs the code generator and handles the output files without CMake (I don't like this option)
Update: This question targets a similar problem, but it only asks how to glob generated files, which does not address how to re-configure when the input file changes.
Together with Tsyvarev's answer and some more googling I came up with the following CMakeLists.txt, which does what I want:
cmake_minimum_required(VERSION 3.6)
project(generated)
set(IN_FILE "${CMAKE_SOURCE_DIR}/input.txt")
set_property(DIRECTORY APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS "${IN_FILE}")
execute_process(
    COMMAND python3 "${CMAKE_SOURCE_DIR}/code-gen" "${IN_FILE}"
    WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
    INPUT_FILE "${IN_FILE}"
    OUTPUT_VARIABLE GENERATED_FILES
    OUTPUT_STRIP_TRAILING_WHITESPACE
)
add_executable(generated main.cpp ${GENERATED_FILES})
It turns an input file (input.txt) into output files using code-gen and compiles them.
execute_process runs at configure time, and the set_property() call makes sure CMake re-runs whenever the input file changes.
Note: in this example the code generator must print a CMake-friendly (semicolon-separated) list of the generated files on stdout, which is easy if you can modify the code generator. FILE(GLOB ...) would do the trick too, but that would certainly lead to problems (e.g. old generated files being compiled too, colleagues complaining about your code, etc.)
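For illustration, a hypothetical code-gen matching that contract (the input format is invented; the point is the semicolon-separated list on stdout):
#!/usr/bin/env python3
# Invented generator: one .cpp per non-empty line of the input
# (CMake feeds the input file on stdin via INPUT_FILE); prints a
# semicolon-separated CMake list of the generated files on stdout.
import os
import sys

paths = []
for line in sys.stdin:
    name = line.strip()
    if not name:
        continue
    path = os.path.abspath(name + '.cpp')  # absolute paths keep the list unambiguous
    with open(path, 'w') as out:
        out.write('// generated code for %s\n' % name)
    paths.append(path)

sys.stdout.write(';'.join(paths))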
PS: I don't like to answer my own questions - If you come up with a nicer or cleaner solution in the next couple of days I'll take yours!

Bazel Checkers Support

What options does Bazel provide for creating new targets, or extending existing ones, that call C/C++ code checkers such as
qac
cppcheck
iwyu
?
Do I need to use a genrule or is there some other target rule for that?
Is https://bazel.build/versions/master/docs/be/extra-actions.html my only viable choice here?
In security-critical software industries, such as aviation and automotive, it's very common to use the results of these calls to collect so-called "metric reports".
In these cases, calls to such linters must produce outputs that are further processed by the build actions of these metric-report collectors. For such cases I cannot find a useful way of reusing Bazel's "extra actions". Any ideas?
I've written something which uses extra actions to generate a compile_commands.json file used by clang-tidy and other tools, and I'd like to do the same kind of thing for iwyu when I get around to it. I haven't used those other tools, but I assume they fit the same pattern too.
The basic idea is to run an extra action which generates some output for each file (aka C/C++ compilation command), and then find all the output files afterwards (outside of Bazel) and aggregate them. A reasonably complete example is here for reference. Basically, the action listener (written in Python) decodes the extra action proto and extracts the source files, compiler options, etc:
import sys
import extra_actions_base_pb2  # generated from Bazel's extra_actions_base.proto

action = extra_actions_base_pb2.ExtraActionInfo()
with open(sys.argv[1], 'rb') as f:
    action.MergeFromString(f.read())
cpp_compile_info = action.Extensions[extra_actions_base_pb2.CppCompileInfo.cpp_compile_info]
compiler = cpp_compile_info.tool
options = ' '.join(cpp_compile_info.compiler_option)
source = cpp_compile_info.source_file
output = cpp_compile_info.output_file
print('%s %s -c %s -o %s' % (compiler, options, source, output))
If you give the extra action an output template, then it can write that output to a file. If you give the output files distinctive names, you can find them all in the output tree and merge them together however you want.
A more sophisticated option is to use bazel query --output=proto and write code to calculate the extra action output filenames of the targets you're interested in from there. That requires writing more code, but you don't have problems with old output files in the output tree that are accidentally included when aggregating.
FWIW, Aspects are another possibility. However, I think extra actions work acceptably for this.
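For reference, the wiring in a BUILD file looks roughly like this; a sketch under assumptions (the //tools package, the target names, and the extract_compile_command binary are all illustrative):
extra_action(
    name = "compile_command_action",
    cmd = "$(location :extract_compile_command) $(EXTRA_ACTION_FILE) " +
          "$(output $(ACTION_ID).compile_command.txt)",
    out_templates = ["$(ACTION_ID).compile_command.txt"],
    tools = [":extract_compile_command"],
)

action_listener(
    name = "compile_command_listener",
    extra_actions = [":compile_command_action"],
    mnemonics = ["CppCompile"],
)
Building with bazel build --experimental_action_listener=//tools:compile_command_listener //... then runs the listener for every C++ compilation, leaving one output file per compile in the output tree to aggregate afterwards.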

rpm spec file skeleton to real spec file

The aim is to have a skeleton spec file, fun.spec.skel, which contains placeholders for Version, Release, and that kind of thing.
For the sake of simplicity I am trying to make a build target which fills in those variables, transforming fun.spec.skel into a fun.spec that I can then commit to my GitHub repo. This is done so that rpmbuild -ta fun.tar works nicely and no manual modification of fun.spec.skel is required (people tend to forget to bump the version in the spec file, but not in the build system).
Assuming the implied question is "How would I do this?", the common answer is to put placeholders in the file like ##VERSION## and then sed the file, or get more complicated and have autotools do it.
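A minimal sketch of the sed variant, using the RELFULLVERS variable introduced below (the ##VERSION## placeholder name is arbitrary):
sed -e "s/##VERSION##/${RELFULLVERS}/" fun.spec.skel > fun.spec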
We place a version.mk file in our project directories which defines the variables we need. Sample content:
RELPKG=foopackage
RELFULLVERS=1.0.0
As part of a script which builds the RPM, we can source this file:
#!/bin/bash
. "$(pwd)/version.mk"
export RELPKG RELFULLVERS
if [ -z "${RELPKG}" ]; then exit 1; fi
if [ -z "${RELFULLVERS}" ]; then exit 1; fi
This leaves us a couple of options to access the values which were set:
We can define macros on the rpmbuild command line:
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" foopackage.spec
We can access the environment variables using %{getenv:...} in the spec file itself (though this makes error handling harder):
%define relpkg %{getenv:RELPKG}
%define relfullvers %{getenv:RELFULLVERS}
From here, you simply use the macros in your spec file:
Name: %{relpkg}
Version: %{relfullvers}
We have similar values (provided by environment variables enabled through Jenkins) which provide the build number which plugs into the "Release" tag.
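For example (the relbuildnum macro name is illustrative; BUILD_NUMBER is the variable Jenkins provides):
% rpmbuild -ba --define "relbuildnum ${BUILD_NUMBER}" foopackage.spec
with the spec then using:
Release: %{relbuildnum}%{?dist}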
I found two ways:
a) use something like
Version: %(./waf version)
where version is a custom waf target
from waflib.Context import Context

def version_fun(ctx):
    print(VERSION)  # VERSION is the global defined at the top of the wscript

class version(Context):
    """Print out the version and only the version"""
    cmd = 'version'
    fun = 'version_fun'
this checks the version at rpm build time
b) create a target that modifies the specfile itself
from waflib.Context import Context
from waflib import Logs
import re

def bumprpmver_fun(ctx):
    spec = ctx.path.find_node('oregano.spec')
    data = None
    with open(spec.abspath()) as f:
        data = f.read()
    if data:
        data = re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*',
                      r'\g<1>{0}\n'.format(VERSION), data, flags=re.MULTILINE)
        with open(spec.abspath(), 'w') as f:
            f.write(data)
    else:
        Logs.warn("Didn't find that spec file: '{0}'".format(spec.abspath()))

class bumprpmver(Context):
    """Bump version"""
    cmd = 'bumprpmver'
    fun = 'bumprpmver_fun'
The latter is used in my pet project oregano on GitHub.

How to use the program's exit status at compile time?

This question is subsequent to my previous one: How to integrate such kind of source generator into CMake build chain?
Currently, the C source file is generated from XS in this way:
set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/${file_src_by_xs} PROPERTIES GENERATED 1)
add_custom_target(${file_src_by_xs}
    COMMAND ${XSUBPP_EXECUTABLE} ${XSUBPP_EXTRA_OPTIONS} ${lang_args} ${typemap_args} ${file_xs} >${CMAKE_CURRENT_BINARY_DIR}/${file_src_by_xs}
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
    DEPENDS ${file_xs} ${files_xsh} ${_XSUBPP_TYPEMAP_FILES}
    COMMENT "generating source from XS file ${file_xs}"
)
The GENERATED property tells CMake not to check for the existence of this source file at configure time, and add_custom_target makes xsubpp re-run on every build. It always re-runs because xsubpp generates an incomplete source file even when it fails, so otherwise the build might continue with an incomplete source file.
I found it time-consuming to always re-run the source generator and recompile, so I want it to re-run only when the XS files it depends on have been modified. However, if I do that, the incomplete generated source file must be deleted on failure.
So my question is: is there any way to remove the generated file only when the program exits abnormally at compile time?
Or, more generally: is there any way to run a command depending on another command's exit status at compile time?
You can always write a wrapper script in your favorite language, e.g. Perl or Ruby, that runs xsubpp and deletes the output file if the command failed. That way you can be sure that if it exists, it is correct.
In addition, I would suggest that you use the OUTPUT keyword of add_custom_command to tell CMake that the file is a result of executing the command. (And, if you do that, you don't have to set the GENERATED property manually.)
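A minimal sketch of such a wrapper in shell (the calling convention, output file first and then the command to run, is made up):
#!/bin/sh
# Run the given command, keeping its output file only on success.
out="$1"; shift
if "$@" >"$out"; then
    exit 0
else
    rm -f "$out"
    exit 1
fi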
Inspired by @Lindydancer's answer, I achieved the purpose with multiple COMMANDs in one target, which doesn't require writing an external wrapper script.
set(source_file_ok ${source_file}.ok)
add_custom_command(
    OUTPUT ${source_file} ${source_file_ok}
    DEPENDS ${xs_file} ${xsh_files}
    COMMAND rm -f ${source_file_ok}
    COMMAND xsubpp ...... >${source_file}
    COMMAND touch ${source_file_ok}
)
add_library(${xs_lib} ${source_file})
add_dependencies(${xs_lib} ${source_file} ${source_file_ok})
The custom command runs three COMMANDs in sequence. The OK file exists only when xsubpp succeeded, and it is added as a dependency of the library. When xsubpp fails, the missing OK file forces the custom command to run again on the next build.
The only flaw is cross-platform support: not every OS has touch and rm, so the names of these two commands should be chosen according to the OS type.
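One portable option is CMake's command-line tool mode, which provides both operations on every platform; the two housekeeping COMMANDs above would become:
COMMAND ${CMAKE_COMMAND} -E remove -f ${source_file_ok}
COMMAND ${CMAKE_COMMAND} -E touch ${source_file_ok}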