How does rpmbuild compute which fields to generate? - cmake

When comparing different RPM files, I've noticed that not all of them expose the same header tags. So there must be some logic that activates/deactivates creation of some of them.
One example is the build time and host. I've stumbled upon two RPM specs, and neither mentions anything that looks like a setting or switch to provide that information. Still, the RPM built from one of them contains the Build Time and Build Host fields, while the other doesn't (I am not permitted to post either spec).
I am aware of the new _buildhost macro, but the RPM version used to generate both packages is too old to support it. Both packages are created from a list of Sources, as far as I can see. The one that doesn't carry the build information is built via CMake/CPack, the other uses rpmbuild directly; that is the only significant difference I know of.
Both are defined as Group: AddOn. So far, I haven't found any remotely definitive resource on which groups are valid or what they mean. The only thing I found was the list of deprecated groups in Fedora. I'd be more interested in a list of supported ones, but haven't been successful so far.
Resources I've found until now (omitting the pointless ones):
Max RPM Package Building Page, RedHat blog-ish tutorial, The RPM build guide, The actual RPM tags documentation, The RPM packaging guide
Unfortunately, none of the above provide the information I'm looking for.
"Give a man a fish" question: How can I suppress creation of Build Time or Build Host in rpm 4.11, be it in spec syntax or in usage of rpmbuild?
"Teach a man how to fish" question: Is there any documentation about what header tags get created with which settings?

You can use Mock for building RPMs (recommended anyway) and set config_opts['hostname'] = 'my.own.hostname'.
Mock will call sethostname() in the chroot.
This is the only way to do it, as far as I know.
rpmbuild should honor SOURCE_DATE_EPOCH for the build time, though I have never used it myself.
You can set the environment variable in the Mock config using:
config_opts['environment']['SOURCE_DATE_EPOCH'] = 'foo'
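Putting both together, a minimal Mock config fragment could look like the sketch below (the hostname and the epoch value are just illustrative placeholders):

# fragment of a Mock .cfg file
config_opts['hostname'] = 'build.example.com'                     # reported as Build Host inside the chroot
config_opts['environment']['SOURCE_DATE_EPOCH'] = '1500000000'    # fixed timestamp, used for Build Time if rpmbuild honors it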

Related

lookup and verify sha256 for kotlinc in bazel

I am trying to migrate from Maven to Bazel and want to use another (newer) version of kotlinc than the standard default.
I have started from the example given on https://github.com/bazelbuild/rules_kotlin#custom-kotlinc-distribution-and-version:
load("#io_bazel_rules_kotlin//kotlin:repositories.bzl", "kotlin_repositories", "kotlinc_version")
kotlin_repositories(
compiler_release = kotlinc_version(
release = "1.6.21", # just the numeric version
sha256 = "632166fed89f3f430482f5aa07f2e20b923b72ef688c8f5a7df3aa1502c6d8ba"
)
)
However, the link and the example explain only the syntax. It is nice that the Bazel configuration is so concise, but on the downside it is hard to see what the effect of this rule is.
How can I know/verify which kotlinc binary will be downloaded, and from where? Can I figure out which other kotlinc releases are available and, more importantly, what the correct sha256 should be?
(Since this seems to be an official part of Bazel (loading from @io_bazel…), maybe there is a public directory (like https://mvnrepository.com/ for mvn, https://www.npmjs.com/ for npm, etc.) to check?)
Both the hash and the download location for the kotlinc release that the Bazel Kotlin rules apparently use can currently be found on GitHub:
https://github.com/JetBrains/kotlin/releases
The hash for each release is printed at the end of the respective release notes, next to kotlin-compiler-*.zip.
The download URL is listed under "Assets", which for older releases may be collapsed (not immediately visible).
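If you want to double-check the checksum locally before pinning it, a quick sketch (assuming the 1.6.21 release from the question; the exact URL is the one shown under that release's Assets):

# download the compiler zip named in the release's Assets list
curl -LO https://github.com/JetBrains/kotlin/releases/download/v1.6.21/kotlin-compiler-1.6.21.zip
# compute the sha256 and compare it with the value passed to kotlinc_version()
sha256sum kotlin-compiler-1.6.21.zip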
(As for the last part of the question: a unified directory of dependencies (artifacts, plugins, toolchains) seems unfeasible for a general-purpose build automation tool such as Bazel. It works for special-purpose tools like Maven, npm and so on, but these ecosystems use quite different approaches and conventions, so it's probably too much hassle to provide that in a unified way for a system like Bazel. At least some of these bounded ecosystems have already started to provide configuration snippets for Bazel.)

How to tell CPack to use the FreeBSD generator?

I have found several interesting links talking about a CPack generator for FreeBSD.
I would like to generate FreeBSD packages; however, whenever I attempt to generate TXZ archives (as directed by the instructions), the generated package isn't compatible with the pkg utility on FreeBSD: it is missing the manifest file.
Obviously, CPack is generating raw archives, not pkg-ready archives. I assume I must be missing a step.
However, none of the links above talk about any such step.
Therefore,
How can I tell CPack to generate a FreeBSD-ready package?
(Original author of that code here)
So, there are two things in play here:
you need to be on FreeBSD (so that you have libpkg, which is needed to do the building)
you need to build the devel/cmake port with the CPACK option enabled (which is not the default)
So:
cd /usr/ports/devel/cmake
make configure and select CPACK
make && make install
Then @Tsyvarev's comment will be the right answer. For the record, the support was deemed experimental and the library API unstable, and the pkg authors have asked me to revamp the code to use the current libpkg API so they can drop the old one. Time, though, is the limiting factor.
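For completeness, once CMake/CPack has been built with libpkg support, selecting the generator works through the usual CPack mechanism; a minimal sketch, assuming the generator name FREEBSD from the CPack documentation:

# in CMakeLists.txt: produce a pkg(8)-compatible package instead of a raw archive
set(CPACK_GENERATOR "FREEBSD")
include(CPack)

# or pick the generator at packaging time instead:
#   cpack -G FREEBSD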

Why are the source file names not human readable?

I installed Perl 6 with rakudobrew and wanted to browse the installed files, only to find a list of hex filenames in ~/.rakudobrew/moar-2018.08/install/share/perl6/site/sources as well as ~/.rakudobrew/moar-2018.08/install/share/perl6/sources/.
E.g.
> ls ~/.rakudobrew/moar-2018.08/install/share/perl6/sources/
09A0291155A88760B69483D7F27D1FBD8A131A35 AAC61C0EC6F88780427830443A057030CAA33846
24DD121B5B4774C04A7084827BFAD92199756E03 C57EBB9F7A3922A4DA48EE8FCF34A4DC55942942
2ACCA56EF5582D3ED623105F00BD76D7449263F7 C712FE6969F786C9380D643DF17E85D06868219E
51E302443A2C8FF185ABC10CA1E5520EFEE885A1 FBA542C3C62C08EB82C1F4D25BE7B4696F41B923
522BE83A1D821D8844E8579B32BA04966BAB7B87 FE7156F9200E802D3DB8FA628CF91AD6B020539B
5DD1D8B49C838828E13504545C427D3D157E56EC
The files contain the source of packages, but this does not feel very accessible. What is the rationale for that?
In Perl 6, the mechanism for loading modules and caching their compilations is pluggable. Rakudo Perl 6 comes with two main mechanisms for this.
One is a file-system based repository, and it's used with things like -Ilib. This resolves modules simply using paths on disk. Whenever a module is loaded, it first has to check that the module's sources have not changed, in order to re-compile them if they have. This is ideal for development, but such checks take time. Furthermore, this doesn't allow for having multiple versions of the same module available and picking the one matching the specification in the use statement. Again, ideal for development, when you just want it to use your latest changes, but less so for installation of modules from the ecosystem.
The other is an installation repository. Here, specific versions of modules are installed and precompiled. It is expected that all interactions with such a repository will be done through the API or through tools that use the API (for example, zef locate Some::Module). It's assumed that once a specific version of a module has been installed, it is immutable. Thus, no checks need to be done against the source, and loading can go straight to the compiled version of the module.
Thus, the installation repository is not intended for direct human consumption. The SHA-1s are primarily an implementation convenience; an alternative scheme could have been used in return for a bit more effort (and may well be used in the future). However, the SHA-1s also create the appearance of something that wasn't intended for direct manipulation - which is indeed the case: editing a source file in there will have no immediate effect, and will probably have confusing effects the next time the compiler is upgraded to a new version.
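If you need to map one of those SHA-named files back to a module (or the other way around), zef can do the lookup; a sketch, where Some::Module is a placeholder name and the SHA-1 is just one of the files listed above:

# resolve an installed module name to its source file
zef locate Some::Module
# or look up a file by its SHA-1 name
zef locate --sha 09A0291155A88760B69483D7F27D1FBD8A131A35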

How do you build Rebol's "Ren-C" branch with LibFFI support?

I'd like to access a dynamic library using FFI features in the Ren-C Rebol branch. I understand this is possible by building with LibFFI support enabled. What steps do I need to take to enable this?
I mainly use OS X for development, though would also like to be able to build it for use with Linux.
(Note: This is probably the kind of information that should be added to the Wiki, as it is not so much a language question but the kind of thing that is subject to change over time. But, answerable, so...)
If you're using the GNU make method to build (where make -f makefile.boot generates a makefile for you) then you should find some lines in there like:
TO_OS_BASE?= TO_OSX
TO_OS_NAME?= TO_OSX_X64
OS_ID?= 0.2.40
BIN_SUFFIX=
RAPI_FLAGS= -D__LP64__ -DENDIAN_LITTLE -DHAS_LL_CONSTS -O1 ...
HOST_FLAGS= -DREB_EXE -D__LP64__ -DENDIAN_LITTLE ...
Modify the RAPI_FLAGS and HOST_FLAGS lines at the beginning to add -DHAVE_LIBFFI_AVAILABLE. That (-D)efines a preprocessor directive to tell the code it's okay to generate calls to FFI, because you have it available for linking later.
Now to tell it where to find include files. There's a line for includes that should look like:
INCL ?= .
I= -I$(INCL) -I$S/include/ -I$S/codecs/ ...
To the tail of that you need to add something that will look like -I/usr/local/opt/libffi/lib/libffi-3.0.13/include, or similar. The actual directory will depend on where you have libffi on your system. On the OSX system I'm looking at, that has two files in it, ffi.h and ffitarget.h.
(Note: I'm afraid I don't know how these files got onto this computer. They didn't ship with the OS, so they came from...somewhere. I don't generally develop on OSX--nor, for that matter, do I use this FFI. You'll have to consult your local FFI-on-OSX website, or perhaps contact Atronix Engineering, who added the FFI features to Rebol, for support.)
Then it's necessary to tell it where you have libffi on your system. You'll find a CLIB line that is likely just CLIB= -lm. You'd change this for example to:
CLIB= -L/usr/local/opt/libffi/lib -lm -lffi
-lffi tells it to look for the ffi (-l)ibrary, and -lxxx means it assumes the name of the library will be libxxx[something]. -L/usr/local/opt/libffi/lib tells it where to look for it. You'll have to figure out where (if anywhere) you have libffi, and install it if you don't. If you had it, the directory would have contents something like:
libffi-3.0.13
libffi.6.dylib
libffi.a
libffi.dylib
pkgconfig
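Since that directory ships a pkgconfig entry, one way to discover the right -I and -L paths on your own system (assuming pkg-config is installed and can find libffi) is:

# print the include and linker flags libffi was installed with
pkg-config --cflags libffi
pkg-config --libs libffi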
I mainly use OS X for development, though would also like to be able to build it for use with Linux.
On Linux it's similar, but generally much easier to get the library - as easy as sudo apt-get install libffi-dev. Do the same step for the RAPI_FLAGS and HOST_FLAGS, and it should take care of the location automatically... so you can add just -lffi to CLIB.
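Putting the Linux case together, the edits to the generated makefile would look roughly like this (only the added -DHAVE_LIBFFI_AVAILABLE and -lffi matter; the other flags stay as generated, and no -I/-L paths are needed because libffi-dev installs into the compiler's default search paths):

RAPI_FLAGS= -DHAVE_LIBFFI_AVAILABLE -D__LP64__ -DENDIAN_LITTLE -DHAS_LL_CONSTS -O1 ...
HOST_FLAGS= -DHAVE_LIBFFI_AVAILABLE -DREB_EXE -D__LP64__ -DENDIAN_LITTLE ...
CLIB= -lm -lffi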
Old notes from me:
cat steps-for-lib-ffi-osx
Install libffi via homebrew
brew install libffi
Add /usr/include/libffi to the -I in the generated makefile
Add /usr/local/Cellar/libffi/3.0.13/lib/libffi.a to the OBJS in the
generated makefile
The version 3.0.13 may vary

How do you make it so that cpack doesn't add required libraries to an RPM?

I'm trying to convert our build system at work over to cmake and have run into an interesting problem with the RPMs that it generates (via cpack): It automatically adds all of the dependencies that it thinks your RPM has to its list of required libraries.
In general, that's great, but in my case it's catastrophic. Unfortunately, the development packages that we build end up getting installed with one of our home-grown tools, which uses rpm to install them into a separate RPM database from the system one. It's stupid, but I can't change it. What this means is that all of the system libraries that any normal library relies on (like libc or libpthread) aren't in the RPM database that is being used with our development packages. So, if an RPM for one of our development packages lists system libraries as being required, then we can't install it, as rpm will think that they're not installed (since they're listed in the normal database rather than the one it's being told to use when installing our packages). Our current build setup handles this just fine, because it doesn't list any system libraries as dependencies in the RPMs, but cpack automatically populates the RPM's list of required libraries and puts the system libraries in there. I need a way to stop it from doing so.
I tried setting CPACK_RPM_PACKAGE_REQUIRES to "", but that has no effect. The RPM cpack generates still ends up with the system libraries listed as being required. All I can think of doing at this point is to copy the RPM cpack generator and hack it up to do what I want and use that instead of the standard one, but I'd prefer to avoid that. Does anyone have any idea how I could get cpack to stop populating the RPM with required libraries?
See bottom of
http://www.rpm.org/max-rpm/s1-rpm-depend-auto-depend.html
The autoreqprov Tag — Disable Automatic Dependency Processing
There may be times when RPM's automatic dependency processing is not desired. In these cases, the autoreqprov tag may be used to disable it. This tag takes a yes/no or 0/1 value. For example, to disable automatic dependency processing, the following line may be used:
AutoReqProv: no
EDIT:
In order to set this in CMake, you need to do set(CPACK_RPM_PACKAGE_AUTOREQPROV " no"). The extra space seems to be required in front of (or behind) the no in order for it to work. It seems that the RPM module for CPack has a bug which won't let you set some of its variables to anything shorter than 3 characters long.
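In context, a minimal CMakeLists.txt fragment might look like this (the leading space works around the length bug described above):

# disable automatic Requires/Provides generation in the produced RPM
set(CPACK_RPM_PACKAGE_AUTOREQPROV " no")
include(CPack)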
To add to Mark Lakata's answer above, there's a snapshot of the "Maximum RPM" doc
http://www.rpm.org/max-rpm-snapshot/s1-rpm-depend-auto-depend.html
that also adds:
The autoreq and autoprov tags can be used to disable automatic processing of requirements or "provides" only, respectively.
And at least with my version of CPackRPM, there seem to be similar variables you can set, e.g.
set(CPACK_RPM_PACKAGE_AUTOREQ " no")
to only disable the automatic dependency processing of 'Requires'.