Use FPM options in EasyBuild for packaging - fpm

I make RPM packages with EasyBuild using the commands below:
module load fpm
eb --package Perl-5.20.1-GCC-4.9.2-bare.eb --robot
Now suppose I want to use some of FPM's options (you can see them with "fpm -h"). For example, with "--rpm-group" in FPM I can set a group for the installed packages. How can I pass these options through eb?

You can specify options for the packaging tool being used (in this case FPM) via the --package-tool-options command line option for eb. Note that --rpm-group takes a value ("mygroup" below is a placeholder), so:
eb --package Perl-5.20.1-GCC-4.9.2-bare.eb --robot --package-tool-options "--rpm-group mygroup"
Or, if you want to do this every time, you can define the $EASYBUILD_PACKAGE_TOOL_OPTIONS environment variable:
export EASYBUILD_PACKAGE_TOOL_OPTIONS='--rpm-group mygroup'
This will be picked up by the eb command; see the output of eb --show-config after defining $EASYBUILD_PACKAGE_TOOL_OPTIONS.
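For reference, a minimal untested sketch passing more than one FPM option at once ("mygroup" and "myuser" are placeholders, not taken from the question):
# placeholder group/user names; adjust to accounts that exist on your system
export EASYBUILD_PACKAGE_TOOL_OPTIONS='--rpm-group mygroup --rpm-user myuser'
eb --package Perl-5.20.1-GCC-4.9.2-bare.eb --robot
# confirm the options were picked up from the environment
eb --show-config | grep -i package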

Related

Can't create extension pg_cron in bitnami:postgres docker container?

I am running a docker container with a database, using the bitnami:postgres image. It is all working fine, but now I want to install pg_cron to schedule automatic jobs.
I installed it and it is available as a possible extension in DBeaver. But when I select and install it I get the message:
ERROR: extension "pg_cron" must be installed in schema "pg_catalog"
When I am using the command
Create Extension pg_cron;
I get:
ERROR: pg_cron can only be loaded via shared_preload_libraries
Hint: Add pg_cron to the shared_preload_libraries configuration variable in postgresql.conf.
I tried to change the postgresql.conf file, but when I restart my docker container to apply the changes, shared_preload_libraries is always reset to pgaudit.
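One hedged idea for anyone hitting the same wall: the Bitnami entrypoint regenerates postgresql.conf on startup, so instead of editing the file you may be able to set the list through the image's environment variable. Untested sketch; the variable name POSTGRESQL_SHARED_PRELOAD_LIBRARIES is an assumption based on Bitnami's configuration conventions:
# let the Bitnami entrypoint build shared_preload_libraries itself
docker run -d --name postgres \
  -e POSTGRESQL_SHARED_PRELOAD_LIBRARIES=pgaudit,pg_cron \
  bitnami/postgresql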

LMOD TCL execute bash script while loading module

I have a small problem where maybe you can help me out. On our new cluster we use Lmod as the environment module system.
When creating a module TCL script for OpenFOAM, a system-dependent bashrc file needs to be sourced.
This is the TCL script which I am using on another module system, where it works fine. I am not able to execute the "source" command line in Lmod; what am I missing here?
#%Module1.0#####################################################################
##
## modules software/openfoam_v1812
##
## /opt/software/openfoam/openfoamv1812/OpenFOAM-v1812
proc ModulesHelp { } {
    global version modroot
    puts stderr "software/OpenFOAM-v1812 - sets the Environment for OpenFOAM-v1812 (openfoam.com)"
}
module-whatis "Sets the environment for using OpenFOAM-v1812"
# for Tcl script use only
set VERSION v1812
set OpenFOAM_PATH /opt/software/openfoam/openfoam${VERSION}/OpenFOAM-${VERSION}
set FOAM_INST_DIR /opt/software/openfoam/openfoam${VERSION}
puts stdout "source /opt/software/openfoam/openfoam${VERSION}/OpenFOAM-${VERSION}/etc/bashrc;"
I am not an expert, but I have recently come across a similar problem, in my case for activating Anaconda Python in a module. In my case, the solution was to use the 'execute' command in Lmod:
https://lmod.readthedocs.io/en/latest/050_lua_modulefiles.html
which has the documentation:
execute {cmd="<any command>", modeA={"load"}}
Run any command with a certain mode. For example, execute {cmd="ulimit -s unlimited", modeA={"load"}} will run the command ulimit -s unlimited as the last thing that loading the module will do.
Hope this helps
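To make that concrete for the OpenFOAM case above, here is an untested Lua modulefile sketch (Lmod's execute is a Lua modulefile function; the paths come from the question, the file name openfoam/v1812.lua is an assumption):
-- openfoam/v1812.lua (hypothetical file name)
local version = "v1812"
local foam_dir = "/opt/software/openfoam/openfoam" .. version .. "/OpenFOAM-" .. version

whatis("Sets the environment for using OpenFOAM-" .. version)

-- source the vendor bashrc as the last step of loading the module
execute{cmd="source " .. foam_dir .. "/etc/bashrc", modeA={"load"}}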

Global environment variables for gitlab CI runner

I am working to set up a gitlab runner for multiple projects, and we want to be able to set up environment variables for all of the projects. I tried to set global variables in the .bashrc for both the gitlab-runner and root users, but they were not recognized during the CI script. What is the correct location to declare global environment variables?
You can also inject environment variables into your gitlab-runner directly on the command line, as gitlab-runner exec docker --help states:
OPTIONS:
...
--env value    Custom environment variables injected to build environment [$RUNNER_ENV]
...
Here is a small example of how I use it in a script:
Change the declarations as needed:
declare jobname="your_jobname"
declare runnerdir="/path/to/your/repository"
Get the env file into a bash array.
[ -f "$runnerdir/env" ] \
&& declare -a envlines=($(cat "$runnerdir/env"))
declare -a envs=()
for env in "${envlines[@]}"; do
envs+=(--env "$env")
done
And finally pass it to the gitlab-runner.
[ -d "$runnerdir" ] && cd "$runnerdir" \
&& gitlab-runner exec docker "${envs[@]}" "$jobname" \
&& cd -
You can define environment variables to inject in the runner's config.toml file. See the advanced runner configuration documentation in the [[runners]] section.
There doesn't seem to be a way to specify environment variables in the GitLab UI just for a specific runner.
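A minimal untested sketch of that config.toml approach; all values here are placeholders:
# /etc/gitlab-runner/config.toml (typical location for a root-installed runner)
[[runners]]
  name = "my-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "shell"
  # injected into every job this runner executes
  environment = ["MY_GLOBAL_VAR=foo", "ANOTHER_VAR=bar"]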
With GitLab 13.1 (June 2020), you now have:
Instance-level CI/CD variables
GitLab now supports instance-level variables.
With this ability to set global variables, you no longer need to manually enter the same credentials repeatedly for all your projects.
This MVC introduces access to this feature by API, and the next iteration of this feature will provide the ability to configure instance-level variables directly in the UI.
See Documentation and issue.
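Until the UI arrives, the API route looks roughly like this (a sketch based on the linked documentation; endpoint and parameters are assumptions to verify there):
# create an instance-level CI/CD variable (requires an admin access token)
curl --request POST \
  --header "PRIVATE-TOKEN: <your_admin_token>" \
  "https://gitlab.example.com/api/v4/admin/ci/variables" \
  --form "key=MY_GLOBAL_VAR" --form "value=foo"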
Consider using an external persistent secret storage service like Vault or Keywhiz.
Disclaimer: I am not associated with, nor have I used, any of the above services.
I have added export MY_VAR="FOO" to gitlab-runner's .bashrc, and it works.
echo export MY_VAR=\"FOO\" >> /home/gitlab-runner/.bashrc
Check which type of executor you use (shell, kubernetes, docker-ssh, parallels...). I use the shell executor.
Check what type of shell gitlab-runner uses (see: How to determine the current shell I'm working on) and edit the proper rc file for that.
Check the GitLab CI Runner user.
I suggest dumping all environment variables for further debugging by adding env to the .gitlab-ci.yml script:
# .gitlab-ci.yml
job:
  script: env
Make changes in ~/.bash_profile, NOT ~/.bashrc.
You can easily set up variables in the GitLab settings:
Project-level variables can be added by going to your project's Settings > CI/CD, then finding the section called Variables.
To make sure your variables are only used in pipelines on protected branches or tags, you can mark them as protected.
See:
https://docs.gitlab.com/ee/ci/variables/#variables

How to run scripts automatically after deployment in AWS using EB CLI?

I am trying to set up a Django server on AWS. My Django app depends on some mathematical Python libraries like numpy, scipy, sklearn, etc. However, there is an issue because of which I need to do this after every deployment:
sudo nano /etc/httpd/conf.d/wsgi.conf
---------------------------------------
add this line in the file
WSGIApplicationGroup %{GLOBAL}
---------------------------------------
sudo /etc/init.d/httpd reload
Basically I need "WSGIApplicationGroup %{GLOBAL}" in my wsgi.conf file, otherwise I get a 504. I am using a custom AMI built on top of Amazon Linux 2014 and I am using EB CLI for deployment. However, whenever I deploy, wsgi.conf is reset and no longer contains the line that I added previously, so I need to manually SSH into the EC2 instance and do this task myself. It adds overhead to every deployment and it's also not feasible once we scale up (cloning or creating instances also resets it). So is there a way for this to be done automatically after every deployment?
The content of wsgi.conf is fixed, so I can easily write a script to create it, but the issue is how to trigger the script automatically.
PS: I am new to AWS.
You need to use the AWS Elastic Beanstalk feature called .ebextensions: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
In your case you can't use the files or commands sections, because:
The commands are processed in alphabetical order by name, and they run before the application and web server are set up and the application version file is extracted.
You need to use the container_commands section:
They run after the application and web server have been set up and the
application version file has been extracted, but before the
application version is deployed.
Example .ebextensions/01wsgi.config (not tested :-))
container_commands:
  apache_reload:
    command: |
      echo "WSGIApplicationGroup %{GLOBAL}" >> /etc/httpd/conf.d/wsgi.conf
      /etc/init.d/httpd reload
Feel free to tweak my example as you want; for example, you can copy your prepared wsgi.conf file somewhere and then replace the original in the container_commands section.
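One possible refinement, also untested: guard the append so repeated deployments to the same instance don't accumulate duplicate lines.
container_commands:
  apache_reload:
    command: |
      grep -q 'WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/wsgi.conf \
        || echo 'WSGIApplicationGroup %{GLOBAL}' >> /etc/httpd/conf.d/wsgi.conf
      /etc/init.d/httpd reload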

How do I set up K Version Manager (KVM) properly so I can run KVM anywhere in PowerShell later?

How do I set up K Version Manager (KVM) properly so I can run KVM by typing "kvm" anywhere in PowerShell later? Do I have to add pathing to the HOME repo?
Run kvm use and pass the -p (persistent) argument. This will add the KRE bin to the user's path, or to the system path if combined with -g.
If you run kvm help you will get all the available arguments:
kvm use <semver>|<alias>|none [-x86][-x64] [-svr50][-svrc50] [-p|-persistent] [-g|-global]
  <semver>|<alias>    add KRE bin to path of current command line
  none                remove KRE bin from path of current command line
  -p|-persistent      add KRE bin to PATH environment variables persistently
  -g|-global          combined with -p to change machine PATH instead of user PATH
Try out the Getting Started section at aspnet/Home.
Once you execute kvmsetup.cmd it will add kvm to the path for all future PowerShell/command line sessions.
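For example, a short untested sequence once kvmsetup.cmd has run (the version string is a placeholder for whatever KRE you actually have installed):
# persist a KRE on the user PATH, then check what kvm sees
kvm use 1.0.0-beta1 -p
kvm list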