The aim is to have a skeleton spec file, fun.spec.skel, which contains placeholders for Version, Release and similar fields.
For the sake of simplicity I want a build target that fills in those variables, transforming fun.spec.skel into fun.spec, which I can then commit to my GitHub repo. That way rpmbuild -ta fun.tar works nicely and no manual modifications of fun.spec.skel are required (people tend to forget to bump the version in the spec file, but not in the build system).
Assuming the implied question is "How would I do this?", the common answer is to put placeholders in the file like ##VERSION## and then sed the file, or get more complicated and have autotools do it.
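For example, a minimal sketch of the sed approach, assuming placeholders ##VERSION## and ##RELEASE## in fun.spec.skel and matching shell variables in the build system:
sed -e "s/##VERSION##/${VERSION}/g" \
    -e "s/##RELEASE##/${RELEASE}/g" \
    fun.spec.skel > fun.spec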
We place a version.mk file in our project directories which defines the version variables. Sample content:
RELPKG=foopackage
RELFULLVERS=1.0.0
As part of a script which builds the RPM, we can source this file:
#!/bin/bash
. $(pwd)/version.mk
export RELPKG RELFULLVERS
if [ -z "${RELPKG}" ]; then exit 1; fi
if [ -z "${RELFULLVERS}" ]; then exit 1; fi
This leaves us a couple of options to access the values which were set:
We can define macros on the rpmbuild command line:
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" foopackage.spec
We can access the environment variables using %{getenv:...} in the spec file itself (though handling errors this way is harder):
%define relpkg %{getenv:RELPKG}
%define relfullvers %{getenv:RELFULLVERS}
From here, you simply use the macros in your spec file:
Name: %{relpkg}
Version: %{relfullvers}
We pass similar values (environment variables set through Jenkins) to supply the build number, which plugs into the "Release" tag.
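For instance, a sketch assuming the Jenkins-provided BUILD_NUMBER variable and a macro name of our own choosing (relbuildnum):
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" --define "relbuildnum ${BUILD_NUMBER}" foopackage.spec
and in the spec file:
Release: %{relbuildnum}%{?dist}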
I found two ways:
a) use something like
Version: %(./waf version)
where version is a custom waf target
from waflib.Context import Context

def version_fun(ctx):
    # VERSION is the constant defined at the top of the wscript
    print(VERSION)

class version(Context):
    """Print out the version and only the version"""
    cmd = 'version'
    fun = 'version_fun'
This resolves the version at rpm build time.
b) create a target that modifies the spec file itself:
from waflib.Context import Context
from waflib import Logs
import re

def bumprpmver_fun(ctx):
    spec = ctx.path.find_node('oregano.spec')
    if spec is None:
        Logs.warn("Didn't find that spec file: 'oregano.spec'")
        return
    with open(spec.abspath()) as f:
        data = f.read()
    if data:
        # VERSION is the constant defined at the top of the wscript
        data = re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*',
                      r'\g<1>{0}\n'.format(VERSION),
                      data, flags=re.MULTILINE)
        with open(spec.abspath(), 'w') as f:
            f.write(data)
    else:
        Logs.warn("Spec file is empty: '{0}'".format(spec.abspath()))

class bumprpmver(Context):
    """Bump the version in the spec file"""
    cmd = 'bumprpmver'
    fun = 'bumprpmver_fun'
The latter is used in my pet project oregano on GitHub.
The root problem is that Nix uses autoconf to build libxml2-2.9.14 instead of CMake, and as a consequence the CMake configuration is missing (details like the version number and platform-specific dependencies such as ws2_32, which my project's CMake scripts need). libxml2-2.9.14 already comes with a CMake configuration and works nicely, except that Nix does not use it (I guess they have their own reasons).
Therefore I would like to reuse the libxml2-2.9.14 nix package and override the builder script with my own (which is a trivial cmake dance).
Here is my attempt:
defaultPackage = forAllSystems (system:
  let
    pkgs = nixpkgsFor.${system};
    cmakeLibxml = pkgs.libxml2.overrideAttrs (o: rec {
      PROJECT_ROOT = builtins.getEnv "PWD";
      builder = "${PROJECT_ROOT}/nix-libxml2-builder.sh";
    });
  in
Where nix-libxml2-builder.sh is my script calling cmake with all the options I need. It fails like this:
last 1 log lines:
> bash: /nix-libxml2-builder.sh: No such file or directory
For full logs, run 'nix log /nix/store/andvld0jy9zxrscxyk96psal631awp01-libxml2-2.9.14.drv'.
As you can see, the issue is that PROJECT_ROOT does not get set (it is ignored), and I do not know how else to point the derivation at my builder script.
What am I doing wrong?
Guessing from the use of defaultPackage in your snippet, you use flakes. Flakes are evaluated in pure evaluation mode, which means there is no way to influence the build from outside. Hence, getEnv always returns an empty string (unfortunately, this is not properly documented).
There is no need to refer to the builder script via $PWD. The whole flake is copied to the nix store so you can use your files directly. For example:
builder = ./nix-libxml2-builder.sh;
That said, the build will probably still fail, because cmake will not be available in the build environment. You would have to override the nativeBuildInputs attribute to add cmake there.
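A minimal sketch of the combined override, assuming nix-libxml2-builder.sh sits next to the flake (attribute names as in the snippet above):
cmakeLibxml = pkgs.libxml2.overrideAttrs (o: {
  # the flake is copied to the store, so a relative path is enough
  builder = ./nix-libxml2-builder.sh;
  # make cmake available in the build environment
  nativeBuildInputs = (o.nativeBuildInputs or [ ]) ++ [ pkgs.cmake ];
});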
I need to provide the binary built with the Meson build system with some git information about the branch and version used:
git describe --tags
git describe --help
The problem I have is how to retrieve this information with Meson.
With the make build I use the following instructions:
GITREF = $(shell git describe --all)
LIB1_VER = $(shell cd ../../lib1;git describe --tags;cd - &> /dev/null)
So in Meson, for GITREF, I tried:
info_dep = vcs_tag(command : ['git describe --all'],
                   input : 'infoBuild.h.in',
                   output : 'infoBuild.h',
                   replace_string : 'BRANCHNAME')
where infoBuild.h.in is:
#define GITREF "BRANCHNAME"
But when I compile with ninja I get:
/usr/local/bin/meson --internal vcstagger ../../src/prog1/info/infoBuild.h.in src/prog1/info/infoBuild.h 1.1.0 /home/mariano/clonesIntel/projMes/src/prog1/info BRANCHNAME '(.*)' '/home/mariano/clonesIntel/ProjMes/src/prog1/info/git describe --all'
but I don't find any infoBuild.h.
Moreover, LIB1_VER is harder because it is in an external folder.
I could work around this with a bash script, but is there a way to retrieve both pieces of information in the Meson build?
I see an immediate problem: it's going to try to run a single command literally named 'git describe --all', which is not what you want; Meson escapes the spaces, so the whole string is treated as a single filename. You want ['git', 'describe', '--all']. Of course, that could just be a typo in your example.
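For reference, a sketch of the vcs_tag() call from the question with the command split into separate arguments (file names taken from the question):
info_dep = vcs_tag(command : ['git', 'describe', '--all'],
                   input : 'infoBuild.h.in',
                   output : 'infoBuild.h',
                   replace_string : 'BRANCHNAME')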
One option you might consider is a run_command plus configure_file. run_command is a command run at configure time that produces a result object you can get string values from. The disadvantage of this compared to vcs_tag (or a custom_target) is that it happens at configure time, as opposed to build time, so you need to reconfigure to update your tags:
res = run_command(['git', 'describe', '--all'], capture : true, check : true)
describe = res.stdout().strip()
version_h = configure_file(
  input : 'version.h.in',
  output : 'version.h',
  configuration : {'PLACEHOLDER' : describe}
)
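For LIB1_VER the same approach works by pointing git at the external folder; a sketch, assuming the ../../lib1 relative path from the Makefile is valid from the source root:
lib1_res = run_command(['git', '-C', '../../lib1', 'describe', '--tags'],
                       capture : true, check : true)
lib1_ver = lib1_res.stdout().strip()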
I have a project whose build options are complicated enough that I have to run several external scripts during the configuration process. If these scripts, or the files that they read, are changed, then configuration needs to be re-run.
Currently the project uses Autotools, and I can express this requirement using the CONFIG_STATUS_DEPENDENCIES variable. I'm experimenting with porting the build process to Meson and I can't find an equivalent. Is there currently an equivalent, or do I need to file a feature request?
For concreteness, a snippet of the meson.build in progress:
pymod = import('python')
python = pymod.find_installation('python3')
svf_script = files('scripts/compute-symver-floor')
svf = run_command(python, svf_script, files('lib'),
                  host_machine.system())
if svf.returncode() == 0
  svf_results = svf.stdout().split('\n')
  SYMVER_FLOOR = svf_results[0].strip()
  SYMVER_FILE = svf_results[2].strip()
else
  error(svf.stderr())
endif
# next line is a fake API expressing the thing I can't figure out how to do
meson.rerun_configuration_if_files_change(svf_script, SYMVER_FILE)
This is what custom_target() is for.
Minimal example
svf_script = files('svf_script.sh')
svf_depends = files('config_data_1', 'config_data_2') # files that svf_script.sh reads
svf = custom_target('svf_config',
  command: svf_script,
  depend_files: svf_depends,
  build_by_default: true,
  output: 'fake')
This creates a custom target named svf_config. When out of date, it runs the svf_script command. It depends on the files in the svf_depends file object, as well as
all the files listed in the command keyword argument (i.e. the script itself).
You can also specify other targets as dependencies using the depends keyword argument.
output is set to 'fake' to stop meson from complaining about a missing output keyword argument. Make sure that there is a file of the same name in the corresponding build directory to stop the target from always being considered out-of-date. Alternatively, if your configure script(s) generate output files, you could list them in this array.
I'm having trouble with a basic task in Meson where I need multiple files concatenated into one during build; basically:
cat *.txt > compiled.txt
or
cat foo.txt bar.txt baz.txt > compiled.txt
However, whether I use custom_target(), generator() or any other function, Meson either can't find compiled.txt or can't handle going from multiple input files to a single output file.
Is there an easy way to achieve this?
Update:
Using run_command() I've managed to build compiled.txt and have it appear in the source directory. Ultimately I want compiled.txt (which I've listed in the gresource.xml) to be compiled by gnome.compile_resources(). Is there a way I can run this command and pass the file directly to that function to process?
Use custom_target() and pass its output to the dependencies of gnome.compile_resources(). Note you will need a fairly recent glib for it to work.
See also: http://mesonbuild.com/Gnome-module.html#gnomecompile_resources
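As a sketch, assuming the gresource XML is called app.gresource.xml and lists compiled.txt, and compiled.txt is produced by a custom_target like the concat_parts target in the solution below:
gnome = import('gnome')
resources = gnome.compile_resources('app-resources',
  'app.gresource.xml',
  dependencies : concat_parts)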
Moved solution from question to answer:
Solution:
I ended up not using gresources, but I still needed this solution to concatenate files:
cat_prog = find_program('cat')

parts_of_the_whole = files(
  'part1.txt',
  'part2.txt'
)

concat_parts = custom_target(
  'concat-parts',
  command: [ cat_prog, '@INPUT@' ],
  capture: true,
  input: parts_of_the_whole,
  output: 'compiled.txt',
  install_dir: appdatadir,
  install: true,
  build_by_default: true
)
This is a bit of a follow-up question to this one.
Say I've managed to extend the Integer class with a new method 'square'. Now I want to use it.
Calling the new method from within the file is easy:
Integer extend [
    square [
        | r |
        r := self * self.
        ^r
    ]
]
x := 5 square.
x printNl.
Here, I can just run $ gst myprogram.st in bash and it'll print 25. But what if I want to use the method from inside the interactive GNU Smalltalk environment? Like this:
$ gst
st> 5 square
25
st>
This may have to do with images; I'm not sure. This tutorial says I can edit the ~/.st/kernel/Builtins.st file to change which files are loaded into the kernel, but I have no such file.
I would not edit what's loaded into the kernel. To elaborate on my comment, one way of loading previously created files into the environment for GNU Smalltalk, outside of using image files, is to use packages.
A sample package.xml file, which defines the package per the documentation, would look like:
<package>
  <name>MyPackage</name>
  <!-- Include any prerequisite packages here, if you need them -->
  <prereq>PrerequisitePackageName</prereq>
  <filein>Foo.st</filein>
  <filein>Bar.st</filein>
</package>
A sample Makefile for building the package might look like:
# MyPackage makefile
#
PACKAGE_DIR = ~/.st
PACKAGE_SPEC = package.xml
PACKAGE_FILE = $(PACKAGE_DIR)/MyPackage.star
PACKAGE_SRC = \
	Foo.st \
	Bar.st

$(PACKAGE_FILE): $(PACKAGE_SRC) $(PACKAGE_SPEC)
	gst-package -t ~/.st $(PACKAGE_SPEC)
With the above files in your working directory alongside Foo.st and Bar.st, you can run make and it will build the .star package file and put it in ~/.st (the first place gst looks for packages). When you run your environment, you can then use PackageLoader to load it in:
$ gst
GNU Smalltalk ready
st> PackageLoader fileInPackage: 'MyPackage'
Loading package PrerequisitePackage
Loading package MyPackage
PackageLoader
st>
Then you're ready to rock and roll... :)
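For example, if the Integer extension from the question is one of the package's file-ins, the new method is available right after loading (a sketch of the expected session):
st> 5 square
25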