.env variables in CI/CD runtime environment - Vue.js

I'm trying to set up CI/CD with GitLab, and what I want to achieve is to replace all the variables in the .env.production file with the ones stored in the GitLab environment variables.
I have searched a lot and could not find any specific example for Vue.js. Has anyone done this already, or does anyone know how to do it?

Below is an example of one of my .gitlab-ci.yml files:
build:
  stage: build
  image: node:12.18.3-buster
  script:
    # Set environment variables
    - export VUE_APP_API_BASE_PATH="$API_BASE_PATH"
    - npm install
    - npm run build
    - tar -zcf dist.tar.gz dist
  artifacts:
    expire_in: 15 min
    paths:
      - dist.tar.gz
  only:
    - master
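For what it's worth, Vue CLI only embeds variables prefixed with VUE_APP_ into the client bundle at build time, and values already present in the environment take precedence over those loaded from .env files, so exporting the variables in the script section before npm run build, as above, should override .env.production. A minimal sketch for several variables (VUE_APP_API_BASE_PATH is from the original; the other names are made up for illustration):

build:
  stage: build
  image: node:12.18.3-buster
  script:
    # Override .env.production values with GitLab CI/CD variables
    # (VUE_APP_SENTRY_DSN / SENTRY_DSN are hypothetical examples)
    - export VUE_APP_API_BASE_PATH="$API_BASE_PATH"
    - export VUE_APP_SENTRY_DSN="$SENTRY_DSN"
    - npm install
    - npm run build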

Related

Gitlab CI Include is not including

I don't get it. I have two repos: one for infrastructure, and the other for project code. Inside the project code I have a .gitlab-ci.yml file that includes one job for creating env variables, plus another include file that pulls in the other stages. Every other stage is triggered, but the first include is never triggered, no matter what I do. What am I doing wrong?
Project .gitlab-ci.yml:
# Stages
stages:
  - pre-build test
  - build
  - post-build test
  - deploy
  - environment
  - e2e

# Main Variables
variables:
  GIT_SUBMODULE_STRATEGY: normal
  IMAGE_VERSION: "latest"
  CARGO_HOME: $CI_PROJECT_DIR/.cargo
  FF_USE_FASTZIP: "true"
  ARTIFACT_COMPRESSION_LEVEL: "fast"
  CACHE_COMPRESSION_LEVEL: "fastest"
  STAGING_BRANCH: "master"
  VARIABLES_FILE: variables.txt

# Include main CI files from infrastructure repository
include:
  - project: 'project/repo-one'
    ref: master
    file: '/gitlab-ci/env/ci-app-env.yml'
  - project: 'project/repo-one'
    ref: master
    file: '/gitlab-ci/app/ci-merge-request.yml'
The env CI file (ci-app-env.yml):
env mr:
  stage: .pre
  before_script:
    - TEST_VAR="TEST"
    - IMAGE_PATH="/var/www"
  script:
    - echo "export TEST_VAR=$TEST_VAR" > $VARIABLES_FILE
    # append (>>) so the second line does not overwrite the first
    - echo "export IMAGE_PATH=$IMAGE_PATH" >> $VARIABLES_FILE
    - cat $VARIABLES_FILE
  artifacts:
    paths:
      - $VARIABLES_FILE
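Presumably variables.txt is then consumed by later jobs that receive the artifact; a sketch of such a consumer (the job name and stage are illustrative, not from the post):

use env:
  stage: build
  script:
    # The artifact from 'env mr' is downloaded automatically in later stages
    - source $VARIABLES_FILE
    - echo "$TEST_VAR"
    - echo "$IMAGE_PATH"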

cmake does not (always) order Fortran modules correctly

I have a code using Fortran modules. I can build it with no problems under normal circumstances. CMake takes care of the ordering of the module files.
However, using a GitLab runner, it SOMETIMES happens that CMake does NOT order the Fortran modules by dependency, but alphabetically instead, which then leads to a build failure.
The problem seems to occur at random. I have a branch that built fine in the CI. After adding a commit that modified a utility script not involved in the build in any way, I ran into this problem. There is no difference in the output of the CMake configure step.
I use the matrix configuration for the CI to test different configurations. I found that I could trigger this by adding another MPI version (e.g. openmpi/4.1.6) to the matrix. Without that version, it built; with it added, ALL configurations showed the problem.
stages:
  - configure
  - build
  - test

.basic_config:
  tags:
    - hpc_runner
  variables:
    # load submodules
    GIT_SUBMODULE_STRATEGY: recursive

.config_matrix:
  extends: .basic_config
  # define job matrix
  parallel:
    matrix:
      - COMPILER: [gcc/9.4.0]
        PARALLELIZATION: [serial, openmpi/3.1.6]
        TYPE: [option1, option2]
        BUILD_TYPE: [debug, release]
      - COMPILER: [gcc/10.3.0, intel/19.0.5]
        PARALLELIZATION: [serial]
        TYPE: [option2]
        BUILD_TYPE: [debug]

###############################################################################
# setup script
# These commands will run before each job.
before_script:
  - set -e
  - uname -a
  - |
    if [[ "$(uname)" = "Linux" ]]; then
      export THREADS=$(nproc --all)
    elif [[ "$(uname)" = "Darwin" ]]; then
      export THREADS=$(sysctl -n hw.ncpu)
    else
      echo "Unknown platform. Setting THREADS to 1."
      export THREADS=1
    fi
  # load environment
  - source scripts/build/load_environment $COMPILER $BUILD_TYPE $TYPE $PARALLELIZATION
  # set path for build folder
  - build_path=build/$COMPILER/$PARALLELIZATION/$TYPE/$BUILD_TYPE

configure:
  stage: configure
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
  artifacts:
    paths:
      - build
    expire_in: 1 days

###############################################################################
# build script
build:
  stage: build
  extends: .config_matrix
  script:
    - cd $build_path
    - make
  artifacts:
    paths:
      - build
    expire_in: 1 days
  needs:
    - configure

###############################################################################
# test
test:
  stage: test
  extends: .config_matrix
  script:
    - cd $build_path
    - ctest --output-on-failure
  needs:
    - build
The runner runs on an HPC machine with a complex setup, and I am not too familiar with the exact configuration. I contacted the admin about this problem, but wanted to see if anybody else has run into this before and has solutions or hints on what is going on.
With help from our admin, I figured it out.
The problem comes from CMake using absolute paths. The host actually has several runners for parallel jobs, each using a different prefix path, e.g. /runner/001/ or /runner/012/. So when the configure step runs on a specific runner, CMake saves that prefix path into the configuration.
In the build stage there is no guarantee that the same configuration lands on the same runner. However, since the generated makefiles contain absolute paths, make tries to access the folders under the configure runner's prefix. What it finds there can be anything from nothing at all, to stale files from previous pipelines, to the correct files downloaded by another matrix case.
The only fix I can currently see is to run everything in a single stage on one runner, to avoid the roulette of prefix paths. If anybody has a different idea, or if there is a way to pin a specific matrix case to a specific runner prefix, please comment.
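A sketch of that workaround, collapsing configure, build, and test into a single job so everything runs under one runner prefix (reusing the names from the configuration above; untested):

build_and_test:
  stage: build
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
    - make -j $THREADS
    - ctest --output-on-failure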

Variables unavailable when running a TAG build

I have a CI pipeline in GitLab (relevant part only):
default:
  image: docker:latest

variables:
  DOCKER_APP_TAG: ${REGISTRY_URL}/${APP_NAME}:${CI_COMMIT_SHORT_SHA}

stages:
  - build

.config:
  only:
    - branches
    - merge_requests
    - tags
  except:
    - triggers
  tags:
    - prod

build-app:
  extends: .config
  stage: build
  script:
    - docker build --target production -t ${DOCKER_APP_TAG} -f ${CI_PROJECT_DIR}/etc/node/Dockerfile .
When I build from a branch (i.e. push to the main branch), everything works well. The docker build command is run with the proper value available in ${DOCKER_APP_TAG}.
However, after I create a tag in GitLab (and a release), the build on this tag fails at the docker build ... line, complaining that the Docker tag is not valid:
invalid argument "/:e5dc27fd" for "-t, --tag" flag: invalid reference format
The variables ${REGISTRY_URL} and ${APP_NAME} are not expanded. I have checked the GitLab docs, and the only limitation I see is if I were running in a service. But that is not the case.
What am I missing to properly expand the variables even in tag builds?
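One thing worth checking, although the post does not say how the variables are defined: if REGISTRY_URL and APP_NAME are marked as protected in the project's CI/CD settings, GitLab only exposes them to pipelines on protected branches and protected tags, so a build on an unprotected tag would see them empty. A throwaway diagnostic job to confirm (illustrative only):

debug-vars:
  stage: build
  script:
    # Empty output here would point at variable scoping (e.g. protected variables)
    - echo "REGISTRY_URL=${REGISTRY_URL}"
    - echo "APP_NAME=${APP_NAME}"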

How to configure lfs.fetchinclude in GitLab CI

I want to git lfs fetch only some directories in GitLab CI, but it failed.
The gitlab-runner version is 11.8.0~beta.1077.
I configured it like this:
variables:
  # Please edit to your GitLab project
  GIT_STRATEGY: clone
  GIT_CHECKOUT: "false"
script:
  - git config lfs.fetchinclude "xxx/xxx/, test/"
But the CI errors with:
root config contains unknown keys: script
How can I fix it?
The first part is a syntax error: the script key must be part of a job, e.g. like this:
build:
  stage: build
  script:
    - git config ...
However, I think the CI runner will fetch LFS files automatically, and the script only runs after cloning.
So I think you have to disable automatic fetching first; then this might work:
build:
  stage: build
  before_script:
    - git config lfs.fetchinclude "xxx/xxx/, test/"
    - git lfs pull
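One way to disable the automatic fetch (a sketch, assuming a runner and git-lfs version that honor it) is git-lfs's GIT_LFS_SKIP_SMUDGE environment variable, which skips the LFS download during clone/checkout:

variables:
  # Skip the automatic LFS download during clone/checkout
  GIT_LFS_SKIP_SMUDGE: "1"

build:
  stage: build
  before_script:
    # Then fetch LFS objects only for the listed directories
    - git config lfs.fetchinclude "xxx/xxx/, test/"
    - git lfs pull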

Gitlab run pipeline job only when previous job ran

I'm trying to create a pipeline with a production and a development deployment. In both environments the application should be built with Docker, but only when something changed in the corresponding directory.
For example:
When something changed in the frontend directory, the frontend should be built and deployed
When something changed in the backend directory, the backend should be built and deployed
At first I didn't have the needs: keyword. The pipeline always executed deploy_backend and deploy_frontend, even when the build jobs were not executed.
Now I've added the needs: keyword, but GitLab says "yaml invalid" when there was only a change in one directory. When there is a change in both directories, the pipeline works fine. When there is, for example, a change in the README.md outside the two directories, it says "yaml invalid" as well.
Does anyone know how I can create a pipeline that only runs when there is a change in a specified directory, and only runs the corresponding deploy job when the build job has run?
gitlab-ci.yml:
stages:
  - build
  - deploy

build_frontend:
  stage: build
  only:
    refs:
      - master
      - development
    changes:
      - frontend/*
  script:
    - cd frontend
    - docker build -t frontend .

build_backend:
  stage: build
  only:
    refs:
      - master
      - development
    changes:
      - backend/*
  script:
    - cd backend
    - docker build -t backend .

deploy_frontend_dev:
  stage: deploy
  only:
    refs:
      - development
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]

deploy_backend_dev:
  stage: deploy
  only:
    refs:
      - development
      - pipeline
  script:
    - "echo deploy backend"
  needs: ["build_backend"]
The problem here is that your deploy jobs require the previous build jobs to actually exist.
However, because of the only.changes rule, those build jobs only exist if something actually changed within their directories.
So when only something in the frontend folder changed, the build_backend job is not generated at all. But the deploy_backend_dev job still is, and then misses its dependency.
A quick fix would be to add the only.changes configuration to the deployment jobs as well, like this:
deploy_frontend_dev:
  stage: deploy
  only:
    refs:
      - development
    changes:
      - frontend/*
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]

deploy_backend_dev:
  stage: deploy
  only:
    refs:
      - development
      - pipeline
    changes:
      - backend/*
  script:
    - "echo deploy backend"
  needs: ["build_backend"]
This way, both jobs will only be created if the dependent build job is created as well, and the YAML will not be invalid.
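Two side notes. First, a changes: pattern like frontend/* only matches files directly inside that directory; frontend/**/* matches recursively, which is usually what is wanted. Second, on newer GitLab versions the same logic can be expressed with rules:, which supersedes only/except; a sketch for the frontend pair (untested, adjust the branch conditions to your flow):

build_frontend:
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "development"'
      changes:
        - frontend/**/*
  script:
    - cd frontend
    - docker build -t frontend .

deploy_frontend_dev:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "development"'
      changes:
        - frontend/**/*
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]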