How to create RPM subpackages using the same paths for different envs?

I would like to use an RPM spec to build subpackages for different environments (live, testing, developer) containing the same files: a package called name-config-live, one called name-config-testing, and one called name-config-developer, each shipping the same paths but with the configs corresponding to the environment it's named after.
As an example, let's say on all environments I have a file called /etc/name.conf, and on testing I want it to contain "1", on development "2", and on live "3". Is it possible to do this in the same spec, given that subpackage generation only happens at the end, not in the order the subpackages are declared? (And hopefully not with %post -n.)
I tried using BuildRoot, but it seems that's a global attribute.

I don't think there's a native way; I would do a %post like you had noted.
However, I would do this (similar to something I do with an internal-only package I develop for work; a spec sketch follows the list):
Three separate files /etc/name.conf-developer, /etc/name.conf-live, etc.
Have all three packages provide a virtual package, e.g. name-config
Have the main package require name-config
This will make rpm, yum, or whatever require that at least one of them be installed in the same transaction
Have all three packages conflict with each other
Have each config package's %post (and possibly %verify) symlink /etc/name.conf to the proper config
This also helps show the user what is happening
Cons:
It's a little hackish
rpm -q --whatprovides /etc/name.conf will say it is not owned by any package
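A minimal spec sketch of that scheme (package names, paths, and %files details here are illustrative, not a drop-in spec):

# Main package pulls in exactly one config flavor via the virtual name
Requires: name-config

%package config-live
Summary: Live configuration for name
Provides: name-config
Conflicts: name-config-testing
Conflicts: name-config-developer

%package config-testing
Summary: Testing configuration for name
Provides: name-config
Conflicts: name-config-live
Conflicts: name-config-developer

# ... name-config-developer follows the same pattern ...

%post config-live
ln -sf /etc/name.conf-live /etc/name.conf

%post config-testing
ln -sf /etc/name.conf-testing /etc/name.conf

%files config-live
%config(noreplace) /etc/name.conf-live

%files config-testing
%config(noreplace) /etc/name.conf-testing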

Related

Yocto: what is the best practice for having a recipe install config files for another application, but only if said application is to be installed?

What is the recommended way to include config files depending on the presence of another package or recipe in an image?
For example, if the python3-supervisor recipe is included in a target image, what is the best way to write a custom recipe so that, in this case, the recipe installs a foo.conf file into the /etc/supervisor/conf.d/ directory, and skips it if the python3-supervisor recipe is not installed?
I was looking at DISTRO_FEATURES as that is sometimes used for installing systemd unit files, but there's also IMAGE_FEATURES which might work too if the .conf files are put in their own packages - e.g. foo-conf installs /etc/supervisor/conf.d/foo.conf but only if, say, IMAGE_FEATURES_append = " supervisor-conf".
I wasn't able to find any recommendations in the bitbake manual on how to do this properly. Is there a good example in any of the OpenEmbedded layers?
In my case I'm writing my own recipes for my own applications, which would install their own .conf files into /etc/supervisor/conf.d/ if the python3-supervisor recipe is installed, and fall back to something else (or nothing) if it isn't. The problem can be thought of more generally as "how do I gate the installation of certain files based on what's going into the image?"
Since there's no reliable way for one package to know what other packages from other recipes are installed in a particular image (each package is built independently of the packages from other recipes, and may or may not be included in the final image), it seems a good way to do this is with a custom DISTRO_FEATURES string.
For example, in local.conf one can specify an arbitrary custom feature, let's call it supervisord:
DISTRO_FEATURES_append = " supervisord"
Then in individual recipes, the following can be used to selectively install supervisord .conf files, if this feature is selected:
# myrecipe.bb
SRC_URI = " \
    ${@bb.utils.contains('DISTRO_FEATURES', 'supervisord', 'file://myrecipe.conf', '', d)} \
"
# ...
do_install() {
    if ${@bb.utils.contains('DISTRO_FEATURES', 'supervisord', 'true', 'false', d)}; then
        install -d ${D}${sysconfdir}/supervisor/conf.d
        install -m 0644 ${WORKDIR}/myrecipe.conf ${D}${sysconfdir}/supervisor/conf.d/myrecipe.conf
    fi
}
Unlike IMAGE_FEATURES, this does not require the features to be pre-defined in a .bbclass or some such. Instead, DISTRO_FEATURES is just treated as a space-delimited list of strings, and the check for a particular string in this list is trivial.

Getting an installed module to recognize changes to config files

I have a package that uses config.json for some settings it uses. I keep the package locally rather than installing it from CPAN. My problem is that when I make changes to config.json, the package doesn't recognize the changes, since the config file is cached elsewhere, forcing me to run zef install --force-install or delete the precomp files. How can I ensure that the package always recognizes updates to the config file?
When you install packages using zef, it keeps them in the filesystem, but their names are converted to SHA1 hashes, something like:
/home/jmerelo/.rakudobrew/moar-2018.03/install/share/perl6/site/sources/81436475BD18D66BFD96BBCEE07CCCDC0F368879
zef keeps track of them, however, and you can locate them using zef locate, for instance:
zef locate lib/Zef/CLI.pm6
You can run that from a program, for instance this way:
sub MAIN( Str $file ) {
    my $location = qqx/zef locate $file/;
    my $sha1 = ($location ~~ /\s+ \=\> \s+ (.+)/);
    say "$file → $sha1[0]";
}
which will return pretty much the same, except it will give you the first location of the file you give it at the command line:
lib/Zef/CLI.pm6 → /home/jmerelo/.rakudobrew/moar-2018.03/install/share/perl6/site/sources/81436475BD18D66BFD96BBCEE07CCCDC0F368879
You probably need to install your config.json file in a resources directory (which is the preferred location) and then use something like that.
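A sketch of the resources approach, assuming the distribution declares config.json under resources in its META6.json (the module and sub names here are hypothetical):

# In META6.json:  "resources": [ "config.json" ]
unit module My::Module;

sub load-config() is export {
    # %?RESOURCES resolves a declared resource to its installed location,
    # whatever sha1-named path zef has stored it under
    %?RESOURCES<config.json>.slurp;
}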
That said, actually installing a module you're still testing is probably not the best strategy. While you're testing things, it's better to keep the module in the directory you're working in and use perl6 -I<that directory>, or else use lib '<that directory>'. You can delete that when you release, or keep it, since it only adds another directory to the search path and will not harm the released module.

How do I install local modules?

For creating and maintaining Perl 5 modules, I use Dist::Zilla. One of my favorite features is being able to install local modules.
However, with Perl 6, I'm not sure how to install local modules. Sure, I can use use lib:
use lib 'relative/path';
use My::Awesome::Module;
But, I'd really like to be able to install My::Awesome::Module, so that all I had to do was use it:
use My::Awesome::Module;
One way to accomplish this, would be setting PERL6LIB, but that still isn't "installing" a module like zef install ./My-Awesome-Module.
Update: Looks like I need to craft an appropriate META6.json file.
To understand how to setup a module to be understood by toolchain utilities, see Preparing the module. Typically this means adding a META6.json file that describes the distribution, including quasi-manifest elements such as which files you really meant to include/provide. Once the META6.json is created the module is ready to be installed:
zef install ./My-Awesome-Module
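For reference, a minimal META6.json for this distribution might look something like this (field values are illustrative):

{
    "perl"        : "6.c",
    "name"        : "My::Awesome::Module",
    "version"     : "0.0.1",
    "description" : "An awesome module",
    "provides"    : {
        "My::Awesome::Module" : "lib/My/Awesome/Module.pm6"
    },
    "depends"     : [ ]
}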
That zef install (assuming no uninstalled dependencies) is essentially:
my $install-to-repo = CompUnit::RepositoryRegistry.repository-for-name("site");
my $preinstall-dist = Distribution::Path.new("./My-Awesome-Module");
$install-to-repo.install($preinstall-dist);
Starting with Rakudo 2019.01 you can, assuming no uninstalled dependencies, install a local distribution without a META6.json -- but this is purely a developmental nicety that won't work for complex setups whose namespacing and file structure can't be inferred.
my $read-from-repo = CompUnit::Repository::FileSystem.new(prefix => "./My-Awesome-Module/lib");
my $install-to-repo = CompUnit::RepositoryRegistry.repository-for-name("site");
my $some-module-name = "My::Awesome::Module"; # needed to get at the Distribution object in the next step
my $preinstall-dist = $read-from-repo.candidates($some-module-name).head;
$install-to-repo.install($preinstall-dist);
I'm writing a tool that may help you: http://github.com/FCO/6pm

CentOS6 - Backup all RPMs and installed programs

Is there a way to back up all installed applications/RPMs/packages (even repositories) as-is (with the same exact versions/patches) in one script that can re-install them on a fresh bare-bones server of the same specs?
Note: I can't do any image or CloneZilla tricks
Note: there is some 3rd-party software that is not offered by any repos ... the solution should include a backup of these packages (preferably all)
thanks!
As noted in a comment, you can backup the RPM database, but that is only one part of replicating your configuration to another server:
RPM's database records almost all of the information regarding the packages you have installed. Using the database, you could in principle write a script that uses cpio or pax to append all of the files known to the RPM database to a suitably large archive area. rpm -qa gives a list of packages, and rpm -ql package gives a list of files for the given package.
However, RPM's database does not necessarily record files created by package %pre and %post scripts.
Likewise, it does not record working data (such as a MySQL database) that may be in /var/lib.
To handle those last two cases, you are going to have to do some analysis of your system to ensure that you do not leave something behind. rpm -qf pathname can tell you which package owns a given file or directory. What you have to do is check for cases where RPM does not know who owns it.
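A rough sketch of the archive-and-reinstall approach (file names are illustrative, and this deliberately ignores the %pre/%post and working-data caveats above):

# Record the exact installed package set
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > packages.txt

# Archive every file the RPM database knows about, via cpio as suggested;
# unpackaged %ghost files will just produce warnings
rpm -qa | xargs rpm -ql | sort -u | cpio -o -H newc 2>/dev/null | gzip > rpm-files.cpio.gz

# On the fresh server, with the same repos configured:
# yum -y install $(cat packages.txt)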

Zope buildout for development environment

Is it possible to have Zope2 buildout unpack python files into their normal directories like how standard python modules do, and not under separate .egg directories? It makes looking for files easier when debugging.
The 'regular' setup doesn't support namespaced packages very well, where multiple eggs share a top-level name (such as plone.app and zope, etc.)
Use the collective.recipe.omelette buildout recipe to build a 'regular' setup, it uses symlinks to give you a searchable structure of all eggs used.
[buildout]
parts =
    omelette
    ...

[omelette]
recipe = collective.recipe.omelette
eggs = ${instance:eggs}
You'll find the result in parts/omelette. Note that this structure uses symlinks, so if you use tools like find or ack, make sure to configure them to follow symlinks (e.g. find -L parts/omelette and ack --follow).
The omelette directory structure is not used by Python itself, it is purely intended for presenting a coherent library structure from all eggs used in your buildout.
Note that for Windows, you need to install the junction utility as well for the recipe to work.