Is there a way in Meson similar to `cmake -LAH`? - meson-build

I am debugging my Meson build and want to see all the cached variables, as I would in CMake. Is there any way to do this? Currently I have to go into meson.build and add message() calls everywhere, which is very inefficient. The Python trick vars() does not work either, but this is not surprising since meson.build is not Python.

Run meson configure build/ on your existing build directory, without any other parameters, to see the current configuration and possible values. Adapt build/ to the name of your build directory. You'll get something like this:
Main project options:
Core options       Current Value  Possible Values                                            Description
------------       -------------  ---------------                                            -----------
auto_features      auto           [enabled, disabled, auto]                                  Override value of all 'auto' features
backend            ninja          [ninja, vs, vs2010, vs2015, vs2017, vs2019, xcode]         Backend to use
buildtype          plain          [plain, debug, debugoptimized, release, minsize, custom]   Build type to use
debug              false          [true, false]                                              Debug
default_library    shared         [shared, static, both]                                     Default library type
install_umask      0022           [preserve, 0000-0777]                                      Default umask to apply on permissions of installed files
layout             mirror         [mirror, flat]                                             Build directory layout
optimization       0              [0, g, 1, 2, 3, s]                                         Optimization level
strip              false          [true, false]                                              Strip targets on install
unity              off            [on, off, subprojects]                                     Unity build
unity_size         4              >=2                                                        Unity block size
warning_level      3              [0, 1, 2, 3]                                               Compiler warning level to use
werror             true           [true, false]                                              Treat warnings as errors
wrap_mode          default        [default, nofallback, nodownload, forcefallback]          Wrap mode
cmake_prefix_path  []                                                                        List of additional prefixes for cmake to search
pkg_config_path    []                                                                        List of additional paths for pkg-config to search
Backend options    Current Value  Possible Values                                            Description
...
and at the end of that list, the options defined in your meson_options.txt:
...
Project options  Current Value  Possible Values  Description
---------------  -------------  ---------------  -----------
docs             true           [true, false]    Build documentation
tests            true           [true, false]    Build and run unit tests
tools            true           [true, false]    Build conversion tools

Before building I always check meson_options.txt for possible options.
I noticed that after configuring with meson there is a file meson-info/intro-buildoptions.json under the build directory.
The options from meson_options.txt reappear in meson-info/intro-buildoptions.json with their configured values.
Since that file is in JSON format, you may want to make it more readable. This is a quick and dirty way that seems to work:
sed -e 's/},/&\n/g' meson-info/intro-buildoptions.json|sed -ne 's/^.*{"name": "\([^"]*\)", "value": \(\[[^]]*\]\|"[^"]*"\|[^,]*\).*$/\1 = \2/p'
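If you have jq available, a shorter equivalent (my own alternative, not part of the original answer) is:
jq -r '.[] | "\(.name) = \(.value)"' meson-info/intro-buildoptions.json
The same JSON can also be printed directly with meson introspect --buildoptions build/.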

Related

Meson equivalent of automake's CONFIG_STATUS_DEPENDENCIES?

I have a project whose build options are complicated enough that I have to run several external scripts during the configuration process. If these scripts, or the files that they read, are changed, then configuration needs to be re-run.
Currently the project uses Autotools, and I can express this requirement using the CONFIG_STATUS_DEPENDENCIES variable. I'm experimenting with porting the build process to Meson and I can't find an equivalent. Is there currently an equivalent, or do I need to file a feature request?
For concreteness, a snippet of the meson.build in progress:
pymod = import('python')
python = pymod.find_installation('python3')
svf_script = files('scripts/compute-symver-floor')
svf = run_command(python, svf_script, files('lib'),
                  host_machine.system())
if svf.returncode() == 0
  svf_results = svf.stdout().split('\n')
  SYMVER_FLOOR = svf_results[0].strip()
  SYMVER_FILE = svf_results[2].strip()
else
  error(svf.stderr())
endif
# next line is a fake API expressing the thing I can't figure out how to do
meson.rerun_configuration_if_files_change(svf_script, SYMVER_FILE)
This is what custom_target() is for.
Minimal example
svf_script = files('svf_script.sh')
svf_depends = files('config_data_1', 'config_data_2') # files that svf_script.sh reads
svf = custom_target('svf_config',
  command: svf_script,
  depend_files: svf_depends,
  build_by_default: true,
  output: 'fake')
This creates a custom target named svf_config. When out of date, it runs the svf_script command. It depends on the files in the svf_depends file object, as well as
all the files listed in the command keyword argument (i.e. the script itself).
You can also specify other targets as dependencies using the depends keyword argument.
output is set to 'fake' to stop meson from complaining about a missing output keyword argument. Make sure that there is a file of the same name in the corresponding build directory to stop the target from always being considered out-of-date. Alternatively, if your configure script(s) generate output files, you could list them in this array.
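For instance, if the configure script wrote its results to a file, a sketch along these lines would avoid the fake output entirely (symver-floor is an assumed output name, and the script is assumed to accept its output path as an argument; neither is from the original answer):
svf = custom_target('svf_config',
  command: [svf_script, '@OUTPUT@'],
  depend_files: svf_depends,
  output: 'symver-floor',
  build_by_default: true)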

Generating compilation database for a single target with cmake

I'm using the CMAKE_EXPORT_COMPILE_COMMANDS variable of cmake to obtain a JSON compilation database that I can then parse to identify the options given to the compiler for each source file. Now, the project I'm working on has several targets, and source files used by several targets appear multiple times in the database, as shown by the example below:
f.c:
int main () { return MACRO; }
CMakeLists.txt:
cmake_minimum_required (VERSION 2.6)
project (Test)
add_executable(test1 f.c)
add_executable(test2 f.c)
target_compile_options(test1 PUBLIC -DMACRO=1)
target_compile_options(test2 PUBLIC -DMACRO=2)
Running cmake . -DCMAKE_EXPORT_COMPILE_COMMANDS=1 will produce the following compile_commands.json file, with two entries for f.c and no easy way to distinguish between them:
[
  {
    "directory": "/home/virgile/tmp/cmakefile",
    "command": "/usr/bin/cc -DMACRO=1 -o CMakeFiles/test1.dir/f.c.o -c /home/virgile/tmp/cmakefile/f.c",
    "file": "/home/virgile/tmp/cmakefile/f.c"
  },
  {
    "directory": "/home/virgile/tmp/cmakefile",
    "command": "/usr/bin/cc -DMACRO=2 -o CMakeFiles/test2.dir/f.c.o -c /home/virgile/tmp/cmakefile/f.c",
    "file": "/home/virgile/tmp/cmakefile/f.c"
  }
]
I'm looking for a way to specify that I'm only interested in e.g. target test1, similar to what you can do in build-tool mode with --target, preferably without having to modify CMakeLists.txt (though this is not a major issue). What I'd like to avoid, on the other hand, is reading the argument of -o in the "command" entry and discriminating between the test1.dir and test2.dir path components.
Apparently this feature was implemented in a merge request about 3 months ago: https://gitlab.kitware.com/cmake/cmake/-/merge_requests/5651
From there I quote:
The new target property EXPORT_COMPILE_COMMANDS associated with the
existing global variable can be used to optionally configure targets for
their compile commands to be exported.
So it seems that you now can set a property on the respective targets in order to control whether or not they will be included in the generated DB.
This feature is part of cmake 3.20. Official docs: https://cmake.org/cmake/help/latest/prop_tgt/EXPORT_COMPILE_COMMANDS.html
Based on the docs, the approach for limiting the compile DB to a specific target should be to set CMAKE_EXPORT_COMPILE_COMMANDS to OFF and then use set_target_properties to set the EXPORT_COMPILE_COMMANDS property on the desired target to ON. E.g.
set(CMAKE_EXPORT_COMPILE_COMMANDS OFF)
...
set_target_properties(test1 PROPERTIES EXPORT_COMPILE_COMMANDS ON)
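Applied to the CMakeLists.txt from the question, a sketch (assuming CMake 3.20 or newer, and a Makefile or Ninja generator, since only those support compile command export) would be:
cmake_minimum_required(VERSION 3.20)
project(Test)
set(CMAKE_EXPORT_COMPILE_COMMANDS OFF)
add_executable(test1 f.c)
add_executable(test2 f.c)
target_compile_options(test1 PUBLIC -DMACRO=1)
target_compile_options(test2 PUBLIC -DMACRO=2)
# Export compile commands only for test1.
set_target_properties(test1 PROPERTIES EXPORT_COMPILE_COMMANDS ON)
The generated compile_commands.json should then contain only the test1 entry for f.c.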
(Earlier answer) This was not supported at the time; there is a request for it in the CMake issue tracker: https://gitlab.kitware.com/cmake/cmake/issues/19462

How can I concatenate multiple files into one in Meson?

I'm having trouble with a basic task in Meson where I need multiple files concatenated into one during build; basically:
cat *.txt > compiled.txt
or
cat foo.txt bar.txt baz.txt > compiled.txt
However whether I use custom_target(), generator() or any other function, Meson either can't find the compiled.txt or can't handle transitioning from multiple input files to a single output file.
Is there an easy way to achieve this?
Update:
Using run_command() I've managed to build compiled.txt and have it appear in the source directory. Ultimately I want compiled.txt (which I've listed in the gresource.xml) to be compiled by gnome.compile_resources(). Is there a way I can run this command and pass the file directly to that function to process?
Use custom_target() and pass its output to the dependencies of gnome.compile_resources(), as sketched below. Note that you will need a fairly recent glib for it to work.
See also: http://mesonbuild.com/Gnome-module.html#gnomecompile_resources
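A minimal sketch of that wiring (the resource and target names here are hypothetical; concat_parts is the custom_target() from the solution below):
gnome = import('gnome')
resources = gnome.compile_resources('app-resources',
  'gresource.xml',
  dependencies: concat_parts)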
Moved solution from question to answer:
Solution:
I ended up not using gresources, but still needed this solution to concatenate files:
cat_prog = find_program('cat')
parts_of_the_whole = files(
  'part1.txt',
  'part2.txt'
)
concat_parts = custom_target(
  'concat-parts',
  command: [ cat_prog, '@INPUT@' ],
  capture: true,
  input: parts_of_the_whole,
  output: 'compiled.txt',
  install_dir: appdatadir,
  install: true,
  build_by_default: true
)

How do I register "custom" Op (actually, from syntaxnet) with tensorflow serving?

I'm trying to serve a model exported from syntaxnet but the parser_ops are not available. The library file with the ops is found (out-of-tree) at:
../models/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_ops.so
I'm currently hacking the mnist_inference example, (because I don't know how to build anything out-of-tree with bazel), and the command I'm running is:
./bazel-out/local-opt/bin/tensorflow_serving/example/mnist_inference --port=9000 /tmp/model/00000001
And the error I'm getting is:
F tensorflow_serving/example/mnist_inference.cc:208] Check failed: ::tensorflow::Status::OK() == (bundle_factory->CreateSessionBundle(bundle_path, &bundle)) (OK vs. Not found: Op type not registered 'FeatureSize')
And FeatureSize is definitely defined in the parser_ops.so, I just don't know how to load it.
I'm not too familiar with TF (I work on Bazel) but it looks like you need to add parser_ops as a dependency of mnist_inference.
There is a right way to do this and a wrong (easier) way.
The Right Way
Basically you add syntaxnet as a dependency of the example you're building. Unfortunately, the syntaxnet project and the tensorflow serving project import tensorflow itself under different names, so you have to do some mangling of the serving WORKSPACE file to get this working.
Add the following to the tensorflow_serving WORKSPACE file:
local_repository(
    name = "syntaxnet",
    path = "/path/to/your/checkout/of/models/syntaxnet",
)
This allows you to refer to the targets in syntaxnet from the tensorflow serving project (by prefixing them with "@syntaxnet"). Unfortunately, as mentioned above, you also have to get all of syntaxnet's external dependencies into the WORKSPACE file, which is annoying. You can test whether it's working with bazel build @syntaxnet//syntaxnet:parser_ops_cc.
Once you've done that, add the cc_library @syntaxnet//syntaxnet:parser_ops_cc (parser_ops.so is a cc_binary, which can't be used as a dependency) to mnist_inference's deps:
deps = [
    "@syntaxnet//syntaxnet:parser_ops_cc",
    "@grpc//:grpc++",
    ...
Note that this still won't quite work: parser_ops_cc is a private target in syntaxnet (so it can't be depended on from outside its package) but you could add an attribute to it like visibility = ["//visibility:public"] if you're just trying things out:
cc_library(
    name = "parser_ops_cc",
    srcs = ["ops/parser_ops.cc"],
    visibility = ["//visibility:public"],
    ...
The Wrong Way
You have a .so, which you can add as a source file for your binary. Add the directory it's in as a new_local_repository() and add the .so to srcs in the BUILD file.
WORKSPACE file:
new_local_repository(
    name = "hacky_syntaxnet",
    path = "/path/to/syntaxnet/bazel-out/local-opt/bin/syntaxnet",
    build_file_content = """
exports_files(glob(["*"]))  # Make all of the files available.
""",
)
BUILD file:
srcs = [
    "mnist_inference.cc",
    "@hacky_syntaxnet//:parser_ops.so",
],
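With either approach in place, rebuild and rerun the example (the target path below is inferred from the binary path shown in the question):
bazel build //tensorflow_serving/example:mnist_inference
./bazel-out/local-opt/bin/tensorflow_serving/example/mnist_inference --port=9000 /tmp/model/00000001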

rpm spec file skeleton to real spec file

The aim is to have a skeleton spec file, fun.spec.skel, which contains placeholders for Version, Release and that kind of thing.
For the sake of simplicity I try to make a build target which updates those variables, transforming fun.spec.skel into fun.spec, which I can then commit to my GitHub repo. This is done so that rpmbuild -ta fun.tar works nicely and no manual modification of fun.spec.skel is required (people tend to forget to bump the version in the spec file, but not in the build system).
Assuming the implied question is "How would I do this?", the common answer is to put placeholders in the file like ##VERSION## and then sed the file, or get more complicated and have autotools do it.
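As a minimal sketch of the sed approach (using the file names from the question and the RELFULLVERS variable introduced below):
sed -e "s/##VERSION##/${RELFULLVERS}/g" fun.spec.skel > fun.spec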
We place a version.mk file in our project directories which defines the release variables. Sample content includes:
RELPKG=foopackage
RELFULLVERS=1.0.0
As part of a script which builds the RPM, we can source this file:
#!/bin/bash
# Pull in RELPKG and RELFULLVERS from the project directory.
. $(pwd)/version.mk
export RELPKG RELFULLVERS
# Fail fast if either value is missing.
if [ -z "${RELPKG}" ]; then exit 1; fi
if [ -z "${RELFULLVERS}" ]; then exit 1; fi
This leaves us a couple of options to access the values which were set:
We can define macros on the rpmbuild command line:
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" foopackage.spec
We can access the environment variables using %{getenv:...} in the spec file itself (though this can make error handling harder):
%define relpkg %{getenv:RELPKG}
%define relfullvers %{getenv:RELFULLVERS}
From here, you simply use the macros in your spec file:
Name: %{relpkg}
Version: %{relfullvers}
We have similar values (provided by environment variables enabled through Jenkins) which provide the build number which plugs into the "Release" tag.
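For example, with a hypothetical relbuildnum macro supplied the same way as the others:
Release: %{relbuildnum}%{?dist}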
I found two ways:
a) use something like
Version: %(./waf version)
where version is a custom waf target
from waflib.Context import Context

def version_fun(ctx):
    # VERSION is assumed to be defined at the top level of the wscript
    print(VERSION)

class version(Context):
    """Print out the version and only the version"""
    cmd = 'version'
    fun = 'version_fun'
This checks the version at rpm build time.
b) create a target that modifies the spec file itself:
from waflib.Context import Context
from waflib import Logs
import re

def bumprpmver_fun(ctx):
    spec = ctx.path.find_node('oregano.spec')
    if spec:
        with open(spec.abspath()) as f:
            data = f.read()
        # Rewrite the Version: line, keeping its leading whitespace.
        data = re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*',
                      r'\1{0}\n'.format(VERSION),
                      data, flags=re.MULTILINE)
        with open(spec.abspath(), 'w') as f:
            f.write(data)
    else:
        Logs.warn("Didn't find that spec file: 'oregano.spec'")

class bumprpmver(Context):
    """Bump version"""
    cmd = 'bumprpmver'
    fun = 'bumprpmver_fun'
The latter is used in my pet project oregano on GitHub.