We need to dynamically pass a variable during RPM installation and capture it in the spec file so that a script in %post can use it.
Following is the command
RPM Install Command
sudo rpm -Uvh --force abc.noarch.rpm --define '_ip 10.1.2.4' --define 'version 3'
**abc.spec**
Name: abc
Version: 1
Release: 1.0
Summary: Test
%{!?_ip: %define _ip 0.0.0.0 }
%{!?_version: %define _version 0 }
%post
echo "ip:::: %{_ip}"
echo "VESION:::: %{_version}"
So when I run the RPM with the above command, I get the following output.
[root@test solution]$ sudo rpm -Uvh --force abc.noarch.rpm --define '_ip 10.1.2.4' --define 'version 3'
Preparing... ################################# [100%]
Updating / installing...
1:abc ################################# [ 50%]
ip:::: 0.0.0.0
VESION:::: 0
Although I pass a different value on the CLI, the argument I pass is not being captured in the spec file.
I need input on how to capture the values I'm passing on the CLI.
The option --define defines a macro. Macros are evaluated when building an RPM from a SRC.RPM using rpmbuild. The binary package (no matter whether it is arch-specific or noarch) has every macro already expanded, even %{_bindir} etc.
The RPM ecosystem was designed to be non-interactive. This is a big difference from the DEB ecosystem, where questions can be asked using debconf.
You cannot work around it. You cannot even ask by reading STDIN directly, as rpm closes that descriptor before executing scriptlets.
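To illustrate the build-time behaviour, here is a minimal sketch, assuming the spec above is otherwise buildable and that the package file name follows Name/Version/Release. The defines only take effect when given to rpmbuild, because that is when the %post text is expanded and frozen into the package:
rpmbuild -bb abc.spec --define '_ip 10.1.2.4' --define '_version 3'
# the scriptlet stored in the built package already contains the literal value
rpm -qp --scripts abc-1-1.0.noarch.rpm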
The best practice is to use configuration files, e.g. /etc/abc/ip.conf. And:
either instruct the user to alter that file manually (or using Ansible) and store their correct data,
or do NOT distribute /etc/abc/ip.conf in the main abc package and instead require abc-config. Then create one or more config packages, which will look like:
Package: abc-testing-config
Provides: abc-config
...
%files
/etc/abc/ip.conf
And you then instruct users to install abc abc-testing-config. Or it can be abc abc-EMEA-config, etc.
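As an illustration of the configuration-file approach, here is a hedged sketch of a %post scriptlet for the main abc package; the file /etc/abc/ip.conf and the IP= variable name are assumptions for this example:
%post
# read the install-time value from a config file instead of a macro
if [ -f /etc/abc/ip.conf ]; then
    . /etc/abc/ip.conf        # e.g. the file contains a line: IP=10.1.2.4
    echo "ip:::: ${IP}"
else
    echo "ip:::: 0.0.0.0"     # fallback default
fi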
Related
I have been using GitHub Actions to build and publish a Docker image to the GitHub Container Registry according to the documentation. I am getting inconsistent behavior when I pull the new image and test it locally.
I have a CMake project in C++ that runs a simple hello world with an INTERFACE and SHARED library.
When I build a docker image locally and test it, this is the output (which is working fine):
*************************************
*** DBSCAN Cluster Segmentation ***
*************************************
--cloudfile: required.
Usage: program [options]
Optional arguments:
-h --help shows help message and exits [default: false]
-v --version prints version information and exits [default: false]
--cloudfile input cloud file [required]
--octree-res octree resolution [default: 120]
--eps epsilon value [default: 40]
--minPtsAux minimum auxiliar points [default: 5]
--minPts minimum points [default: 5]
-o --output-dir output dir to save clusters [default: "-"]
--ext cluster output extension [pcd, ply, txt, xyz] [default: "pcd"]
-d --display display clusters in the pcl visualizer [default: false]
--cal-eps calculate the value of epsilon with the distance to the nearest n points [default: false]
In Github Actions I am using this workflow:
name: Demo Push
on:
  push:
    # Publish `master` as Docker `latest` image.
    branches: ["test-github-packages"]
    # Publish `v1.2.3` tags as releases.
    tags:
      - v*
  # Run tests for any PRs.
  pull_request:
env:
  IMAGE_NAME: dbscan-octrees
jobs:
  # Push image to GitHub Packages.
  # See also https://docs.docker.com/docker-hub/builds/
  push:
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: recursive
      - name: Build image
        run: docker build --file Dockerfile --tag $IMAGE_NAME --label "runnumber=${GITHUB_RUN_ID}" .
      - name: Test image
        run: |
          docker run --rm \
            --env="DISPLAY" \
            --env="QT_X11_NO_MITSHM=1" \
            --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
            dbscan-octrees:latest
      - name: Log in to registry
        # This is where you will update the PAT to GITHUB_TOKEN
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u $ --password-stdin
      - name: Push image
        run: |
          IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
          # Change all uppercase to lowercase
          IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
          # Strip git ref prefix from version
          VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
          # Strip "v" prefix from tag name
          [[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')
          # Use Docker `latest` tag convention
          [ "$VERSION" == "master" ] && VERSION=latest
          echo IMAGE_ID=$IMAGE_ID
          echo VERSION=$VERSION
          docker tag $IMAGE_NAME $IMAGE_ID:latest
          docker push $IMAGE_ID:latest
The compilation and test steps are working fine with no errors (check this run). The problem is with the newly generated image after the push to the GitHub Container Registry: when I pull it locally to test it, the program crashes with an "Illegal instruction (core dumped)" error. I have tried to debug the problem and there is no compilation error, link error, or anything like that. I found out that this might be related to the linking of the SHARED library, but it is strange, because the image works when it is built in the GitHub Actions runner, so I don't understand why the pushed image fails.
I found this post suggesting the error might be related to GitHub changing the container during installation.
Hope someone can help me with this.
This is the output of the Test image step in the workflow: [screenshot: workflow output]
This is the error after pulling the newly generated image and testing it locally: [screenshot: Illegal instruction error]
I have even compared the bad binary (the GitHub-built version in the Docker image) with the good version (compiled locally) using ghex; the binary generated by GitHub after pushing a new image is a little bigger than the good one. [screenshots: binary comparison, binary sizes]
Issue
CPU AVX instruction set not supported by local PC
Solution
Enable compilation flags in CMake to disable AVX support
Description
After digging with binary analysis tools, debugging, etc., I discovered that the problem was related to AVX CPU support in the GitHub Actions runner. My computer does not support AVX-optimized instructions, so I had to set a compilation flag for my shared libraries in order to disable AVX support. This compilation flag tells the GitHub Actions runner to compile the project with no AVX CPU support or CPU optimizations in the standard GitHub Actions environment.
Analysis tools:
ldd binary
strace binary <-- this one allowed me to identify the signal that killed the program (SIGILL)
container-diff
[screenshot: error log]
Using the strace tool I got the following error:
--- SIGILL {si_signo=SIGILL, si_code=ILL_ILLOPN, si_addr=0x55dcd7324bc0} ---
+++ killed by SIGILL (core dumped) +++
Illegal instruction (core dumped)
This error allowed me to find the error code, and after searching the internet I found a solution to my specific problem: since my project uses the Point Cloud Library (PCL), I compiled my project with -mno-avx, according to this post.
Solution
In the CMakeLists.txt file for each SHARED library, define the following compilation flag:
target_compile_options(${PROJECT_NAME} PUBLIC -mno-avx)
New issue
I have resolved the major issue, but now one of my shared libraries has the same error. I will try to fix it with some of these flags (I think).
After running a lot of tests, using the CPU-X software, and detecting the proper architecture-specific options of my PC via GCC with the following command:
gcc -march=native -E -v - </dev/null 2>&1 | grep cc1
output:
/usr/lib/gcc/x86_64-linux-gnu/9/cc1 -E -quiet -v -imultiarch
x86_64-linux-gnu - -march=haswell -mmmx -mno-3dnow -msse -msse2
-msse3 -mssse3 -mno-sse4a -mcx16 -msahf -mmovbe -mno-aes -mno-sha
-mpclmul -mpopcnt -mabm -mno-lwp -mno-fma -mno-fma4 -mno-xop
-mno-bmi -mno-sgx -mno-bmi2 -mno-pconfig -mno-wbnoinvd -mno-tbm
-mno-avx -mno-avx2 -msse4.2 -msse4.1 -mlzcnt -mno-rtm -mno-hle
-mrdrnd -mno-f16c -mfsgsbase -mno-rdseed -mno-prfchw -mno-adx
-mfxsr -mno-xsave -mno-xsaveopt -mno-avx512f -mno-avx512er
-mno-avx512cd -mno-avx512pf -mno-prefetchwt1 -mno-clflushopt
-mno-xsavec -mno-xsaves -mno-avx512dq -mno-avx512bw -mno-avx512vl
-mno-avx512ifma -mno-avx512vbmi -mno-avx5124fmaps
-mno-avx5124vnniw -mno-clwb -mno-mwaitx -mno-clzero -mno-pku
-mno-rdpid -mno-gfni -mno-shstk -mno-avx512vbmi2 -mno-avx512vnni
-mno-vaes -mno-vpclmulqdq -mno-avx512bitalg -mno-avx512vpopcntdq
-mno-movdiri -mno-movdir64b -mno-waitpkg -mno-cldemote
-mno-ptwrite --param l1-cache-size=32
--param l1-cache-line-size=64 --param l2-cache-size=3072
-mtune=haswell -fasynchronous-unwind-tables
-fstack-protector-strong -Wformat -Wformat-security
-fstack-clash-protection -fcf-protection
Final solution
I have fixed the execution error with the following flags in my SHARED library:
# MMX, SSE(1, 2, 3, 3S, 4.1, 4.2), CLMUL, RdRand, VT-x, x86-64
target_compile_options(${PROJECT_NAME} PRIVATE -Wno-cpp
-mmmx
-msse
-msse2
-msse3
-mssse3
-msse4.2
-msse4.1
-mno-sse4a
-mno-avx
-mno-avx2
-mno-fma
-mno-fma4
-mno-f16c
-mno-xop
-mno-bmi
-mno-bmi2
-mrdrnd
-mno-3dnow
-mlzcnt
-mfsgsbase
-mpclmul
)
Now, the docker image stored in the GitHub Container Registry is working as expected on my local PC.
Related posts
What is the proper architecture-specific options (-m) for Sandy Bridge based Pentium?
using cmake to make a library without sse support (windows version)
https://github.com/PointCloudLibrary/pcl/issues/5248
Compile errors with Assembler messages
https://github.com/PointCloudLibrary/pcl/issues/1837
I am using a simple conan file example from a repository of examples. I would like to generate a lockfile, but when I try the command, I get this error:
.../folly/basic $ conan lock create
ERROR: Specify the 'name' and the 'version'
When I try to do so, following the documentation, I still get the same error:
.../folly/basic $ conan lock create --name=libb --version=0.2
ERROR: Specify the 'name' and the 'version'
.../folly/basic $ conan lock create --name libb --version 0.2
ERROR: Specify the 'name' and the 'version'
Does anybody have any advice? I am sure it's something obvious, but I am new to conan.
conan lock create requires a conanfile.py, which is not present in your example. That example uses a simple conanfile.txt to install the project dependencies (Folly and OpenSSL).
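For reference, such a conanfile.txt looks roughly like this; the exact references come from the example repository, so treat the versions below as placeholders:
[requires]
folly/2020.08.10.00
openssl/1.1.1k

[generators]
cmake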
You can still generate the lock file by installing those requirements:
$ mkdir build && cd build/
$ conan install ..
...
$ ls
conan.lock conanbuildinfo.cmake conanbuildinfo.txt conaninfo.txt graph_info.json
Also, note that you are not passing the conanfile path, as required by the command:
.../folly/basic $ conan lock create
ERROR: Specify the 'name' and the 'version'
Instead, you should pass the path where the conanfile.py is located:
$ conan lock create conanfile.py
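Putting both points together, here is a sketch of a full invocation for a project that does have a conanfile.py; the name and version are just the placeholders from your attempt:
$ conan lock create ./conanfile.py --name=libb --version=0.2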
However, if you want to generate a lock file for a single reference only (e.g. Folly), you can do it directly with the following command:
$ conan lock create --reference folly/2020.08.10.00# -r conancenter
When I'm building a Singularity container I'd like to read environment variables from the host system in the %post section. I've been looking online for a way to achieve this, but to no avail. I'm starting to question if this is even possible at the moment, but I can't find any mentions of it being possible/impossible.
Example:
Singularity definition file: recipe
BootStrap: docker
From: continuumio/anaconda3
%runscript
%post
echo $TEST_ENV_VARIABLE
On the host system / OS
export TEST_ENV_VARIABLE='foo'
sudo singularity build test.sif recipe
prints only a blank line when echoing TEST_ENV_VARIABLE.
If there is no way of reading the host system's environment variables in the %post section, are there any other ways of passing arguments into the recipe that could be used at build time?
That is not currently possible, though there is an open issue for that functionality. I'm not personally a fan of dynamic build options as it makes it harder to guarantee reproducibility.
If you do want something more dynamic, you could use a template to create different definition files. A very simplistic example:
$ cat gen_def.py
#!/usr/bin/env python3
import sys
my_def = """BootStrap: docker
From: continuumio/anaconda3
%post
echo This is {0}
echo This is {1}"""
print(my_def.format(*sys.argv[1:]))
$ ./gen_def.py one two > Singularity.custom
$ sudo singularity build test.sif Singularity.custom
I want to build a container from my conda environment following this post. However, I get the following error: '/bin/sh: 1: cannot create ~/.bashrc: Directory nonexistent'. I am using a vagrant VM to build my image and would be grateful for any help.
Editing the .bashrc, aside from failing, will not be helpful, as the shell loaded by Singularity is explicitly started with --norc. You want to use the $SINGULARITY_ENVIRONMENT variable in %post to have the values available at runtime.
Something along these lines:
%post
# You may need to install some pre-reqs your host system has installed outside of conda, e.g.
# apt update && apt install -y build-essential make zlib
ENV_NAME=$(head -1 environment.yml | cut -d' ' -f2)
echo ". /opt/conda/etc/profile.d/conda.sh" >> $SINGULARITY_ENVIRONMENT
echo "conda activate $ENV_NAME" >> $SINGULARITY_ENVIRONMENT
. /opt/conda/etc/profile.d/conda.sh
conda env create -f environment.yml -p /opt/conda/envs/$ENV_NAME
I listed a few libraries that you probably have installed on your current machine but that might not be installed in the slim Docker image. You can install them via apt or conda, depending on your preference. If it does happen, though, it'll be specific to your environment.yml and host OS, so you'll have to iterate until the build succeeds.
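Put together, a minimal definition file could look something like this; the miniconda3 base image and the location of environment.yml are assumptions to adapt to your setup:
BootStrap: docker
From: continuumio/miniconda3

%files
    environment.yml /environment.yml

%post
    # derive the env name from the first line of environment.yml (e.g. "name: myenv")
    ENV_NAME=$(head -1 /environment.yml | cut -d' ' -f2)
    # make the environment activate automatically at container runtime
    echo ". /opt/conda/etc/profile.d/conda.sh" >> $SINGULARITY_ENVIRONMENT
    echo "conda activate $ENV_NAME" >> $SINGULARITY_ENVIRONMENT
    # create the environment during the build
    . /opt/conda/etc/profile.d/conda.sh
    conda env create -f /environment.yml -p /opt/conda/envs/$ENV_NAME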
This is getting rather maddening - I'm trying to build an RPM out of some BASH scripts which work as Nagios plugins. I keep getting:
error: Installed (but unpackaged) file(s) found:
/usr/lib64/nagios/plugins/netappassigncheck
/usr/lib64/nagios/plugins/netappassignprep
In the %files directive of my spec file I have tried most of the combos that have been suggested here and on various other internet forums:
/usr/lib/nagios/plugins/*
/usr/lib/nagios/plugins/netappassigncheck
/usr/lib/nagios/plugins/netappassignprep
%dir /usr/lib/nagios/plugins/
And currently I am on
%dir %{_libdir}/nagios/plugins/
This is why my most recent error output shows lib64; the previous errors, when I quoted the full path, showed /usr/lib/...
These are the only 2 files that should make up the package as well.
Here is my .spec file
Name: netappautoassign
Summary: A set of Nagios Plugins for automatically assigning disks to a Netapp
Version: 1.0
Release: 1
License: %{license}
Group: Applications/System
Source: %{source}
URL: Reserved
Vendor: %{vendor}
Packager: %{packager}
BuildArch: noarch
Requires: bash, grep, util-linux, coreutils, expect, openssh-clients, bc, sed
Provides: netappassignprep, netappassigncheck
%description
Since Netapp's autoassign function may lead to disks being assigned to the
wrong head these NAGIOS plugins will ensure disks are added to the correct
head when replaced.
%prep
%setup -q
%build
%install
rm -rf %{buildroot}
install -d %{buildroot}%{_libdir}/nagios/plugins
cp netappassigncheck %{buildroot}%{_libdir}/nagios/plugins/
cp netappassignprep %{buildroot}%{_libdir}/nagios/plugins/
%files
%defattr(755,root,root,755)
%dir %{_libdir}/nagios/plugins/
%clean
rm -rf %{buildroot}
%post
And here's my ~/.rpmmacros
%_topdir %(echo $HOME)/rpmbuild
%_tmppath %{_topdir}/tmp
%buildroot %{_tmppath}/%{name}-%{version}
%license RESERVED
%source %{name}-%{version}.tar.gz
%vendor REDACTED
%packager REDACTED
EDIT - SOLVED
I'm not sure if this is a bug or desired behaviour, but it would appear that during the build section the %{buildroot} variable was not being read in from .rpmmacros. Having moved this variable into the main spec file, the RPM is now built.
I'm not sure if this is a bug or desired behaviour, but it would appear that during the file verification stage it was reading in all the currently active plugins under the root file system and not under %{buildroot}.
I suspected that the %{buildroot} variable was not being read in from .rpmmacros at this stage, although it was for all other stages.
I moved the declaration of %{buildroot} into my main .spec file and the build is now working!
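For anyone hitting the same message, here is a hedged sketch of the relevant spec fragments after the change described above; note that listing the plugin files (or a wildcard) under %files, rather than only the directory, is what tells rpmbuild the files are intentionally packaged:
# moved from ~/.rpmmacros into the spec itself
BuildRoot: %{_tmppath}/%{name}-%{version}

%files
%defattr(755,root,root,755)
%dir %{_libdir}/nagios/plugins/
%{_libdir}/nagios/plugins/netappassigncheck
%{_libdir}/nagios/plugins/netappassignprep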