GitHub Actions: permission denied when using custom shell

I am trying to use a shell script as a custom shell in GitHub Actions, like this:
- name: Test bash-wrapper
  shell: bash-wrapper {0}
  run: |
    echo Hello world
However, when I try to run it, I get Permission denied.
Background: I have set up a chroot jail, which I use with QEMU user mode emulation in order to build for non-IA64 architectures with toolchains that lack cross-compilation support.
The script is intended to provide a bash shell on the target architecture and looks like this:
#!/bin/bash
# Run the given command inside the chroot jail, as the invoking user,
# via QEMU user-mode emulation.
sudo chroot --userspec="$(whoami):$(whoami)" "$CROSS_ROOT" qemu-arm-static /bin/bash -c "$*"
It resides in /bin/bash-wrapper and is thus on $PATH.
Digging a bit deeper, I found:
Running bash-wrapper "echo Hello world" in a GHA step with the default shell works as expected.
Running bash-wrapper 'echo Running as $(whoami)' from the default shell correctly reports we are running as user runner.
Removing --userspec from the chroot command in bash-wrapper (thus running the command as root) does not make a difference – the custom shell gives the same error.
GHA converts each step into a script file and passes it to the shell.
File ownership on these files is runner:docker, runner being the user that runs the job by default.
Interestingly, the script files generated by GHA are not executable. I suspect that is what is ultimately causing the permission error.
Indeed, if I modify bash-wrapper to set the executable bit on the script before running it, everything works as expected.
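For reference, a minimal sketch of that workaround, assuming (as the {0} placeholder above implies) that GHA passes the generated script's path to the wrapper as its argument:
#!/bin/bash
# bash-wrapper (sketch): GHA substitutes the generated step script's
# path for {0}, so it arrives here as "$1".
chmod +x "$1"   # work around the missing executable bit
sudo chroot --userspec="$(whoami):$(whoami)" "$CROSS_ROOT" \
  qemu-arm-static /bin/bash -c "$*"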
I imagine non-executable script files would cause all sorts of trouble with various shells, so I would expect GHA to have a way of dealing with this – in fact, I am a bit surprised these on-the-fly scripts are not executable by default.
Is there a less hacky way of fixing this, such as telling GHA to set the executable bit on temporary scripts? (How does Github expect this to be solved?)

When calling your script, try running it like this:
- name: Test bash-wrapper
  shell: bash-wrapper {0}
  run: |
    bash <your_script>.sh
Alternatively, try running this command locally, then commit and push the repository:
git update-index --chmod=+x <your_script>.sh
This marks the file as executable in the index, so the checkout on the runner already has the executable bit set.

Related

Apache Airflow command not found with SSHOperator

I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 = """
cd /files/232-065/Rans
bash run.sh
"""
Where 'run.sh' runs the shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine (it does not require a GUI). This makes me think that there is a problem with the SSH session and it is not the same as the one that Apache Airflow logs into, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging from the error message). The issue is with the starccm+ installation path.
Check the installation path of starccm+, and check whether that path is part of the $PATH environment variable:
$ echo $PATH
If not, then either install it in a standard location like /bin or /usr/bin (provided those are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
It is not ideal, but if you struggle with setting the PATH variable, you can run starccm+ by stating its full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
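Since the SSHOperator runs its command in a non-interactive shell, a related option is to extend PATH inside run.sh itself. A minimal sketch, where /opt/starccm/bin is a hypothetical placeholder for the actual installation directory:
#!/bin/sh
# run.sh (sketch): extend PATH before invoking starccm+.
# /opt/starccm/bin is a placeholder -- substitute the real install dir.
export PATH="$PATH:/opt/starccm/bin"
starccm+ -batch run_export.java Rans_Model.sim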

Why is $PATH different when executing commands via SSH and libssh?

I'm trying to run a command on a remote host via libssh2 as wrapped by the ssh2 Rust crate.
So I would like to run the command cargo build, but when I try to run it via libssh, I get the error:
cargo: command not found
However, when I ssh into the server manually from the command line everything works fine.
I have noticed that $PATH is different when running ssh from the command line and when running via libssh:
for instance when I echo $PATH
ssh gives me:
/home/<user>/.cargo/bin:/usr/share/swift/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bi
while libssh gives me:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
So it looks like the modifications made to $PATH inside .bashrc and .profile are not applied when running via libssh.
I also get the same behavior if I run /bin/bash -c "echo ${PATH}"
Why would this be the case, and is there any way to get the same behavior in both these cases?
Please take a look at this related question.
TL;DR: A login shell first reads /etc/profile and then ~/.bash_profile. A non-login shell reads /etc/bash.bashrc and then ~/.bashrc.
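A common workaround, given the above, is to request a login shell explicitly when executing the remote command, so the profile files are sourced first. A minimal sketch; the same command string would be passed to the exec call in libssh or the ssh2 crate:
# Force a login shell (-l) so /etc/profile and ~/.bash_profile run,
# populating PATH before the actual command executes:
bash -lc 'cargo build'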

Flatpak IntelliJ IDEA - problem with Subversion executable

After installing IntelliJ IDEA using Flatpak on Clear Linux, I'm not able to make it run the svn executable.
I added --filesystem=host to the Flatpak permissions and tried to set the executable path to /run/host/usr/bin/svn, but with no luck (the path is available and exists, though IntelliJ keeps complaining).
The svn command is normally available from the system terminal.
When I try to run the /run/host/usr/bin/svn command via the IntelliJ IDEA built-in terminal, I get an error that a shared library is not available:
sh-5.0$ /run/host/usr/bin/svn
/run/host/usr/bin/svn: error while loading shared libraries: libsvn_client-1.so.0: cannot open shared object file: No such file or directory
I also tried flatpak-spawn. The following command works perfectly fine in the IntelliJ IDEA built-in terminal:
/usr/bin/flatpak-spawn --host /usr/bin/svn
though when set as the path to the svn executable, it still gives me an IntelliJ IDEA error:
"The path to Subversion executable is probably wrong"
Could anybody please help with making it work?
TL;DR: You probably need to add the path to svn to your IntelliJ terminal PATH.
Details:
It looks like you are having a PATH issue. I had a similar problem running kubectl in PyCharm installed from a Flatpak on Pop!_OS.
I have kubectl installed in /usr/local/bin, and it runs fine from my 'normal' terminal, but in the PyCharm terminal it is not found: that location is mounted under /run/host/usr/local/bin/, which is not on the PATH there.
After adding /run/host/usr/local/bin/ to my PATH, I can run kubectl.
To make sure this comes up all the time, I added the PATH entry to the Terminal settings.
I can now execute any of the commands in my /usr/local/bin dir.
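For reference, the change amounts to something like this in the IDE's built-in terminal (a sketch; adjust the directory to wherever your host binaries appear under /run/host):
# Make host binaries mounted under /run/host visible in the Flatpak terminal
export PATH="$PATH:/run/host/usr/local/bin"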
I found a really ugly solution for dealing with SVN with the JetBrains family, which does actually answer the question, but in a very roundabout way. Unfortunately, Alex Nelson's solution didn't work for me.
You would think the Flatpak would come with a valid SVN, since it's actually part of the expected requirements for the program...
When in the terminal, you can run
cd ..
/usr/bin/flatpak-spawn --host vim ./svn
Then press i to go into insert mode and paste the following into the opened text file (basically, this creates an executable wrapper that passes its arguments on to the flatpak-spawn invocation):
#!/bin/bash
/usr/bin/flatpak-spawn --host /usr/bin/svn "$@"
Save and quit from vim (ESC, then :wq!). Make it executable:
chmod +x svn
Then in IntelliJ's menu, set the "path to svn" to
/home/<yourusername>/IdeaProjects/svn
It's worked for everything I've tried... Hope this helps out anyone else who was struggling with this.
I am using a similar solution to caluga.
#!/bin/sh
cd
exec /usr/bin/env -- flatpak-spawn --host /usr/bin/env -- svn "$@"
exec makes the shell replace the wrapper-script process, so the wrapper script process can end.
I'm using /bin/sh instead of /bin/bash as bash features are not needed.
Using /usr/bin/env may not be necessary if PATH is set up right.
Remember to quote "$@" in case there are spaces in arguments.
I am putting it in ~/.local/bin and referencing it with its absolute path in the IntelliJ settings (Settings -> Version Control -> Subversion -> Path to Subversion executable).
I was also running into problems with IntelliJ saying that the /app/idea-IC path does not exist. I figured that something outside the Flatpak (i.e. svn or env) was trying to change to the working directory from which the wrapper script was invoked (inside the Flatpak). Using cd allows the wrapper script to change to a directory that exists both inside the Flatpak and on the host.
Fedora Silverblue or toolbox users might want to use dev tools inside their toolbox, in which case you can do:
#!/bin/sh
cd
exec /usr/bin/env -- flatpak-spawn --host toolbox run svn "$@"

Setup Amazon S3 backup on QNAP using s3cmd

I own a QNAP TS-219P and I want to set this up manually using s3cmd.
I did quite a bit of research on this, and here are the references I got:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get s3cmd to work on my TS-219P.
I got everything to work on the command line, even running the script file (s3-backup.sh):
#!/bin/bash
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log
(I also tried #!/bin/sh as the shebang, and running s3cmd via Python by prepending /usr/bin/python to the command.)
If I run it from the SSH command prompt, it works perfectly.
The problem, though, is the cron job. I can confirm the cron job triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created/modified.
This is my cronjob task:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've tried a number of different variations on the above, but couldn't find out what was missing.
I feel like some dependency is missing when crontab runs the script, compared to when I run it at the command prompt, but I don't know how to debug crontab.
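One generic way to debug this (a sketch, not QNAP-specific; the diff line assumes bash for process substitution) is to capture the environment cron actually runs with and compare it to your interactive session:
# Temporary crontab entry: dump the environment cron gives to jobs
* * * * * env > /tmp/cron-env.txt 2>&1
# Later, in an interactive SSH session, compare the two environments:
diff <(sort /tmp/cron-env.txt) <(env | sort)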
It turned out that the problem was that the s3cmd configuration file was not found when s3cmd ran from cron.
So the fix was simply to copy the .s3config file to a safe shared folder, and then call s3cmd with the --config parameter pointing at that file.
Like this:
/share/maintenance/s3-backup/s3cmd/s3cmd \
  --config /share/maintenance/s3-backup/s3cmd.config \
  --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ \
  >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1

Local environment not set in MPI process

When I run mpiexec on a few computers, some of them don't automatically load their local environments – they don't seem to source their .bashrc or .bash_profile files. When I ssh into these troublesome computers, everything is fine (the environment is all there). What else could be missing?
If I run
mpiexec -np 1 --host remotehost printenv
I get a very small result. However, if I do the following:
ssh remotehost
printenv
I get a much larger and more comprehensive result. What is the difference between these two?
MPI jobs run in non-interactive shells, which do not load .bashrc. Rather than having each job load its own .bashrc, it is usually better to set the environment variables in the call to mpiexec. MPICH (with its Hydra launcher) passes all environment variables from the launching process by default, but with Open MPI you need to forward variables explicitly using the -x option.
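For illustration, hedged examples for both launchers (flag spellings vary between versions, so check your mpiexec man page):
# MPICH (Hydra) forwards the launching environment by default;
# -envall makes that explicit:
mpiexec -envall -np 1 -hosts remotehost printenv
# Open MPI forwards only the variables you name, via -x:
mpiexec -x PATH -x LD_LIBRARY_PATH -np 1 --host remotehost printenv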