Custom error handling in makefile - error-handling

I need to build a C program which requires a particular Linux package. I set a variable, PACKAGENOTIFICATION, to a shell command which is supposed to check if the package is installed for Ubuntu and print a notification if not:
PACKAGENOTIFICATION := if cat /etc/issue | grep Ubuntu -c >>/dev/null; then if ! dpkg -l | grep libx11-dev -c >>/dev/null; then echo "<insert notification here>"; fi; fi
[...]
maintarget: dependencies
	$(PACKAGENOTIFICATION)
	other_commands
Unfortunately, while making the dependencies, it runs into the files which need the package, and errors out before executing my PACKAGENOTIFICATION. An alternative formulation is to make a separate target whose only purpose is to run the notification:
maintarget: notify other_dependencies
	commands

notify:
	$(PACKAGENOTIFICATION)
However, since this phantom dependency always needs to be executed, make never reports that the program is up to date.
What's the best way to have make always report as up to date, but also execute my notification before it dies?
Thanks!

If your version of Make supports "order-only" prerequisites, this will do it:
# Note the pipe
maintarget: other_dependencies | notify
	commands

# This should be an order-only preq of any target that needs the package
notify:
	$(PACKAGENOTIFICATION)
If not, there are other approaches.
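For reference, a minimal sketch of the complete setup under that assumption (GNU Make 3.80 or later for order-only prerequisites; target names are illustrative and PACKAGENOTIFICATION is the variable from the question):

.PHONY: notify
notify:
	$(PACKAGENOTIFICATION)

# The pipe makes notify an order-only prerequisite: it runs before the recipe,
# but it is ignored when deciding whether maintarget is out of date, so make
# can still report "up to date". (Recipe lines must start with a tab.)
maintarget: other_dependencies | notify
	commands

As the comment in the answer suggests, the same | notify can be appended to any other target that needs the package, so the notification is printed before those recipes run and fail.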

Related

Github Actions, permission denied when using custom shell

I am trying to use a shell script as a custom shell in Github Actions like this:
- name: Test bash-wrapper
shell: bash-wrapper {0}
run: |
echo Hello world
However, when I try to run it, I get Permission denied.
Background: I have set up a chroot jail, which I use with QEMU user mode emulation in order to build for non-IA64 architectures with toolchains that lack cross-compilation support.
The script is intended to provide a bash shell on the target architecture and looks like this:
#!/bin/bash
sudo chroot --userspec=`whoami`:`whoami` $CROSS_ROOT qemu-arm-static /bin/bash -c "$*"
It resides in /bin/bash-wrapper and is thus on $PATH.
Digging a bit deeper, I found:

- Running bash-wrapper "echo Hello world" in a GHA step with the default shell works as expected.
- Running bash-wrapper 'echo Running as $(whoami)' from the default shell correctly reports we are running as user runner.
- Removing --userspec from the chroot command in bash-wrapper (thus running the command as root) does not make a difference – the custom shell gives the same error.
- GHA converts each step into a script file and passes it to the shell. File ownership on these files is runner:docker, runner being the user that runs the job by default.
- Interestingly, the script files generated by GHA are not executable. I suspect that is what is ultimately causing the permission error.
Indeed, if I modify bash-wrapper to set the executable bit on the script before running it, everything works as expected.
I imagine non-executable script files would cause all sorts of trouble with various shells, so I would expect GHA to have a way of dealing with that – in fact I am a bit surprised these on-the-fly scripts are not executable by default.
Is there a less hacky way of fixing this, such as telling GHA to set the executable bit on temporary scripts? (How does Github expect this to be solved?)
When calling your script, try running it like this:
- name: Test bash-wrapper
  shell: bash-wrapper {0}
  run: |
    bash <your_script>.sh
Alternatively, try running this command locally and then commit and push the repository:
git update-index --chmod=+x <your_script>.sh
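For reference, the workaround described in the question (marking the generated step script executable inside the wrapper before running it) might look roughly like this; the "$1" handling is an assumption, based on GitHub Actions substituting the path of the generated step script for {0}:

#!/bin/bash
# Hedged sketch of the asker's own workaround, not an official GHA feature:
# {0} in the `shell:` line is replaced by the path of the temporary step
# script, so it arrives here as "$1". Mark it executable before the chroot'ed
# shell tries to run it.
[ -f "$1" ] && chmod +x "$1"
sudo chroot --userspec="$(whoami)":"$(whoami)" "$CROSS_ROOT" qemu-arm-static /bin/bash -c "$*"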

Problems getting Singularity Compose to work

I wrote a small test project for Singularity Compose, consisting of a small server application, with the following YAML file:
version: "1.0"
instances:
server:
build:
context: ./server
recipe: server.recipe
ports:
- 9999:9999
When I call singularity-compose build, it successfully builds server.sif. Calling singularity-compose up also seemingly works without error, and calling singularity-compose ps results in something that looks just fine:
+ singularity-compose ps
INSTANCES  NAME    PID      IMAGE
1          server  4176911  server.sif
However, the server application does not work; calling my test client results in it saying that there is no answer from the server.
But if I run server.sif directly without compose, everything works just fine.
Also, I triple-checked: my test application listens on port 9999 and should thus be reachable from the outside.
What did I do wrong?
Edit:
I also checked whether there actually is any process listening on port 9999 by calling sudo lsof -i -P -n | grep LISTEN; this is not the case. Only when I manually start server.sif without compose does it show a process listening.
Edit:
I went into the Singularity Compose shell and tried to start the Server application directly in there, just as a test, and it resulted in Permission denied. Not sure if that means anything.
Edit:
I now gave the application execution rights within the shell and ran it there; this works. I am now trying to add execution rights in the recipe. If that works, it would be kind of strange, as the executable was built right there and thus should already have execution rights.
Edit:
I added chmod +x in my recipe both after building Server and before executing it. Doesn't work either.
I also checked whether any bridges exist using brctl show; this is not the case.
Edit: My recipe, adjusted by the input of tsnowlan in his answer below:
Bootstrap: docker
From: ubuntu:20.04

%files
    connection.cpp
    connection.h
    main.cpp
    server.cpp
    server.h
    server.pro

%post
    # get some basics
    apt update
    apt-get install -y wget
    apt-get install -y software-properties-common
    # get C++ compiler
    apt-get install -y g++
    apt-get install -y build-essential
    apt-get install -y build-essential cmake
    # get Qt
    apt-get install -y qt5-default
    # compile
    qmake
    make
    ls

%runscript
    /Server

%startscript
    /Server
Again, note that the application works just fine both when compiled and started normally and when started within a Singularity image (but without Singularity Compose).
The ls at the end of the %post block is used to verify that the Server application was built successfully.
Please share the server.recipe, as it is difficult to identify what should be/is happening without it.
Without having that, my guess is that you have a %runscript in your definition file, but no %startscript. When the image is executed directly or via singularity run image.sif, the contents of %runscript determine what happens. To emulate the docker-compose style, the singularity images are started as persistent instances. In this case, the %startscript block determines what runs. If it is empty, the instance will just start up and sit there doing nothing. This would explain why it works when run by hand but not when using compose.
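To make the distinction concrete, here is a minimal sketch of the two sections (the exec lines reuse the /Server path from the recipe above; the comments note which invocation uses which block):

%runscript
    # used when the image is executed directly, e.g. `singularity run server.sif`
    exec /Server "$@"

%startscript
    # used when an instance is started, which is what singularity-compose up does,
    # e.g. `singularity instance start server.sif server`
    exec /Server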

Is it possible to abort a pacman installation from pre_install()

When creating a PKGBUILD file one can execute hooks at pre_install(), post_install(), etc.
I now have a custom Arch Linux pacman package for which I need some custom checks done before it is installed, to determine whether it is safe to install or not.
I would like to run my test in the pre_install() script and have pacman abort the installation if I say so in the script.
So, how can this be accomplished? So far all I have accomplished is getting an error message in the log, but pacman continues with the install...
I would not recommend this as it sounds like a code smell: in my opinion the pre_install() hook is designed to perform actions before package files are actually installed on your drive, but it is not meant to check whether the package should be installed.
In my opinion, such a check belongs to some other place out of the package.
You could call a command that returns a non-zero exit code to cancel the build process. The simplest command I could think of is sh -c "exit 1", since just exit 1 results in an immediate exit without any proper cleanup.
Here is a simple example that checks if a file exists and cancels the build process if not:
prepare() {
    if ! [ -f "/usr/bin/ffmpeg" ]; then
        echo "Error: FFmpeg executable '/usr/bin/ffmpeg' is missing."
        sh -c "exit 1"
    fi
}
However, galaux is right. Usually such checks should happen upstream.

GNU Make Error 126, C:\Program is a directory

GNU make gives me a strange error message, which I do not understand.
gao#L8470-130213 ~
$ make
echo Test
C:\Program: C:\Program: is a directory
make: *** [test] Error 126
This is what I thought of verifying:
gao#L8470-130213 ~
$ less makefile
test:
	echo Test
gao#L8470-130213 ~
$ which make
/c/Programx86/GnuWin32/bin/make
gao#L8470-130213 ~
$ /c/Progra~2/GnuWin32/bin/make.exe test
echo Test
C:\Program: C:\Program: is a directory
make: *** [test] Error 126
gao#L8470-130213 ~
$ make --version
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-pc-mingw32
It feels like some other program is trying to run at the end, and that its path includes some spaces. In that case, what program could it be, and how can I prevent it from running?
I have seen this thread and tried to disable my antivirus, which did not help.
I have also looked into permissions, but I am not sure whether the makefile needs execution rights. I can't seem to change that anyway (running in bash on Windows; the makefile is not read-only when I check in Explorer):
gao#L8470-130213 ~
$ ls -l makefile
-rw-r--r-- 1 gao Administ 21 Apr 15 14:53 makefile
gao#L8470-130213 ~
$ chmod +x makefile
gao#L8470-130213 ~
$ ls -l makefile
-rw-r--r-- 1 gao Administ 21 Apr 15 14:53 makefile
What is going on with make, what can I do?
It's not "some other program" that's trying to run, it's the echo command. Make prints the command to be run, echo test, but you never see the output (test) so that means it failed trying to find the echo program. Unfortunately I'm not very familiar with the vagaries of running GNU make on Windows: there are lots of different options. One possibility would be to get a newer version of GNU make; 3.81 is very old. 3.82 is now available and might work better for you.
Good info you added above about your environment re: using bash; that wasn't clear from the original question, and on Windows there are many different ways to do things. You're using the MinGW version of make; that version (as I understand it) does NOT use bash as the shell to run commands in: it's supposed to be used with native Windows environments, which certainly do not have bash available. I believe that the version of make you have is invoking commands directly, and/or using command.com. Certainly not a UNIX shell like bash.
If you want to use bash you should set the SHELL make variable to the path of your bash.exe program. If you're using a Cygwin environment you can use the GNU make that comes with Cygwin which behaves more like a traditional make + shell.
Otherwise you'll need to write your commands using Windows command.com statements.
Again, I don't use Windows so this is mostly hearsay.
PS. The makefile does not need to be executable.
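For illustration, a minimal sketch of that SHELL suggestion (the bash.exe path below is only an assumption; point it at wherever bash actually lives on your machine, preferably a path without spaces or an 8.3 short name such as C:/Progra~2/...):

# Tell make explicitly which shell to run recipe lines with.
# The path is an example only; adjust it to your installation.
SHELL := C:/msys/bin/bash.exe

test:
	echo Test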
What is going on is that make doesn't like file names or directory names with spaces in them, such as Program Files. Neither do most of the utilities that makefiles typically rely on, such as the shell to execute commands with.
I create a junction from Program Files to ProgramFiles and use the latter whenever I encounter cases like this.

How to check if scp command is available?

I am looking for a multiplatform solution that would allow me to check if the scp command is available.
The problem is that scp does not have a --version command-line option, and when called without parameters it returns with exit code 1 (error).
Update: in case it wasn't clear, by multiplatform I mean a solution that will work on Windows, OS X and Linux without requiring me to install anything.
Use the command which scp. It lets you know whether the command is available, and its path as well. If scp is not available, nothing is returned.
#!/bin/sh
scp_path=`which scp || echo NOT_FOUND`
if test $scp_path != "NOT_FOUND"; then
    if test -x ${scp_path}; then
        echo "$scp_path is usable"
        exit 0
    fi
fi
echo "No usable scp found"
echo "No usable scp found"
sh does not have a built-in which, so we rely on a system-provided which command. I'm not entirely sure if the -x check is needed: on my system, the which command itself verifies that the found file is executable by the user, but this may not be portable. In the rare case where the system has no which command, one can write a which function here.
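As an aside (not part of the answer above), POSIX sh also provides the command -v built-in, which avoids depending on an external which; a minimal sketch:

#!/bin/sh
# `command -v` is specified by POSIX: it prints how the name would be resolved
# and returns a non-zero status when the command cannot be found.
if scp_path=$(command -v scp); then
    echo "scp found: $scp_path"
else
    echo "No usable scp found" >&2
    exit 1
fi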