Append the package.json version number to my build artifact in aws-codebuild

I don't really know whether this is a simple (it must be), common, or complex task.
I have a buildspec.yml file in my CodeBuild project, and I am trying to append the version written in the package.json file to the output artifact.
I have already seen a lot of tutorials that teach how to append the date (not really useful to me), and others that tell me to execute a version.sh file with this
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}.[0-9]{1,}.*)",$/\1/p' package.json)
and set it in a variable (it doesn't work).
I'm ending up with a build folder called: "my-project-$(version.sh)"
The CodeBuild environment uses Ubuntu and Node.js.
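(For reference: since the image has Node.js available, the version can also be read without sed; a minimal sketch, not from the original post:

TAG=$(node -p "require('./package.json').version")
echo "$TAG"   # prints e.g. 1.0.0
)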
Update (solved):
My version.sh file:
#!/usr/bin/env bash
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}\.[0-9]{1,}.*)",$/\1/p' package.json)
Then I just found out two things:
Make your version.sh file executable in the repository:
git update-index --add --chmod=+x version.sh
Declare a variable in any phase of the buildspec; I did it in the build phase (just to make sure the repository has already been copied into the environment):
TAG=$($CODEBUILD_SRC_DIR/version.sh)
Then reference it in the versioned artifact name:
artifacts:
  files:
    - '**/*'
  name: my-project-$TAG
As a result, my build artifact's name is: my-project-1.0.0
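Putting it together, a minimal buildspec sketch of the above (assuming version.sh was made executable with git update-index as described):

version: 0.2
phases:
  build:
    commands:
      # capture the package.json version for use in the artifact name
      - TAG=$($CODEBUILD_SRC_DIR/version.sh)
artifacts:
  files:
    - '**/*'
  name: my-project-$TAG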

In my case this script does not want to fetch data from package.json. On my local machine it works great, but on AWS it doesn't. I also had to use chmod in a different way, because I got a message that I don't have the right permissions. My buildspec:
version: 0.2
env:
  variables:
    latestTag: ""
phases:
  pre_build:
    commands:
      - "echo sed version"
      - sed --version
  build:
    commands:
      - chmod +x version.sh
      - latestTag=$($CODEBUILD_SRC_DIR/version.sh)
      - "echo $latestTag"
artifacts:
  files:
    - '**/*'
  discard-paths: yes
And the results in the console: (screenshot of the CodeBuild console omitted)
I should also note that when I put only, for example, echo 222 into the version.sh file, I get the right answer in the CodeBuild console.
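One way to narrow this down (a hedged debugging sketch, not from the original thread) is to print the working directory and inspect package.json inside the build phase; CRLF line endings, for example, would stop the ",$ anchor in the sed expression from matching:

  build:
    commands:
      - pwd && ls -la                        # confirm the job runs in the repo root
      - grep -n '"version"' package.json     # confirm the line the regex expects exists
      - cat -A package.json | grep version   # a trailing ^M$ means CRLF line endings
      - chmod +x version.sh
      - latestTag=$($CODEBUILD_SRC_DIR/version.sh)
      - echo "$latestTag"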

Related

cmake does not (always) order Fortran modules correctly

I have a code base using Fortran modules. I can build it with no problems under normal circumstances; CMake takes care of the ordering of the module files.
However, using a GitLab runner, it SOMETIMES happens that CMake does NOT order the Fortran modules by dependencies, but alphabetically instead, which then leads to a build failure.
The problem seems to occur at random. I have a branch that built in the CI. After adding a commit that modified a utility script not involved in the build in any way, I ran into this problem. There is no difference in the output of the cmake configure step.
I use a matrix configuration for the CI to test different configurations. I found that I could trigger this by adding another MPI version (e.g. openmpi/4.1.6). Without that version, it built. With it added to the matrix, ALL configurations showed the problem.
stages:
  - configure
  - build
  - test

.basic_config:
  tags:
    - hpc_runner
  variables:
    # load submodules
    GIT_SUBMODULE_STRATEGY: recursive

.config_matrix:
  extends: .basic_config
  # define job matrix
  parallel:
    matrix:
      - COMPILER: [gcc/9.4.0]
        PARALLELIZATION: [serial, openmpi/3.1.6]
        TYPE: [option1, option2]
        BUILD_TYPE: [debug, release]
      - COMPILER: [gcc/10.3.0, intel/19.0.5]
        PARALLELIZATION: [serial]
        TYPE: [option2]
        BUILD_TYPE: [debug]

###############################################################################
# setup script
# These commands will run before each job.
before_script:
  - set -e
  - uname -a
  - |
    if [[ "$(uname)" = "Linux" ]]; then
      export THREADS=$(nproc --all)
    elif [[ "$(uname)" = "Darwin" ]]; then
      export THREADS=$(sysctl -n hw.ncpu)
    else
      echo "Unknown platform. Setting THREADS to 1."
      export THREADS=1
    fi
  # load environment
  - source scripts/build/load_environment $COMPILER $BUILD_TYPE $TYPE $PARALLELIZATION
  # set path for build folder
  - build_path=build/$COMPILER/$PARALLELIZATION/$TYPE/$BUILD_TYPE

configure:
  stage: configure
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
  artifacts:
    paths:
      - build
    expire_in: 1 days

###############################################################################
# build script
build:
  stage: build
  extends: .config_matrix
  script:
    - cd $build_path
    - make
  artifacts:
    paths:
      - build
    expire_in: 1 days
  needs:
    - configure

###############################################################################
# test
test:
  stage: test
  extends: .config_matrix
  script:
    - cd $build_path
    - ctest --output-on-failure
  needs:
    - build
The runner runs on an HPC machine with a complex setup, and I am not too familiar with the exact configuration. I contacted the admin about this problem, but wanted to see if anybody else has run into this before and has solutions or hints on what is going on.
With the help of our admin I figured it out.
The problem comes from CMake using absolute paths. The machine actually hosts several runners for parallel jobs, each using a different prefix path, e.g. /runner/001/ or /runner/012/. So when I run configure on a specific runner, CMake saves that prefix path into the configuration.
Now, in the build stage there is no guarantee that the same configuration lands on the same runner. However, since there are absolute paths in the makefiles, make tries to access the folders under the configure runner's prefix. What it finds there can be anything from non-existent paths, to stale files from previous pipelines, to the correct files downloaded by another job.
The only fix I currently see is to run everything on the same runner in one stage, to avoid the roulette of prefix paths; see the sketch below. If anybody has a different idea, or if there is a way to pin a specific matrix case to a specific runner prefix, please comment.
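A minimal sketch of that workaround, reusing the .config_matrix template and $CMAKE_COMMAND from the configuration above: collapsing configure, build, and test into a single job guarantees that the absolute paths CMake records all belong to the one runner that executes the whole sequence.

build_and_test:
  stage: build
  extends: .config_matrix
  script:
    # configure, build, and test on one runner so CMake's cached
    # absolute paths always point at this runner's prefix
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
    - make -j $THREADS
    - ctest --output-on-failure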

Cannot locate docker build output of multistage build inside CodeBuild

We're using an aws/codebuild/standard:5.0 CodeBuild image to build our own Docker images. I have a buildspec that calls docker build against our Dockerfile and pushes to ECR. The Dockerfile uses Microsoft dotnet base images and calls dotnet publish to build our binaries. This all works fine.
We then added a build stage to our Dockerfile to run unit tests (using dotnet test), and we followed the "FROM scratch" advice combined with docker build --output to try to pull the unit-test result files out of the multi-stage target:
docker build --target export-test-results -f ./Dockerfile --output type=local,dest=out .
This works fine locally (an out dir is created containing the files), but when I run this in CodeBuild, I cannot find where the output ends up (the command succeeds, but I've no idea where it's going). I've added ls commands everywhere and cannot locate the out dir, so of course my artifacts step has nothing to archive.
Question is: where is the output being created inside the CodeBuild instance?
My (abbreviated) Dockerfile
ARG VERSION=3.1-alpine3.13
FROM mcr.microsoft.com/dotnet/aspnet:$VERSION AS base
WORKDIR /usr/local/bin
FROM mcr.microsoft.com/dotnet/sdk:$VERSION AS source
#Using pattern here to bypass need for recursive copy from local src folder: https://github.com/moby/moby/issues/15858#issuecomment-614157331
WORKDIR /usr/local
COPY . ./src
RUN mkdir ./proj && \
    cd ./src && \
    find . -type f -a \( -iname "*.sln" -o -iname "*.csproj" -o -iname "*.dcproj" \) -exec cp --parents "{}" ../proj/ \;
FROM mcr.microsoft.com/dotnet/sdk:$VERSION AS projectfiles
# Copy only the project files with correct directory structure
# then restore packages - this will mean that "restore" will be saved in a layer of its own
COPY --from=source /usr/local/proj /usr/local/src
FROM projectfiles AS restore
WORKDIR /usr/local/src/Postie
RUN dotnet restore --verbosity minimal -s https://api.nuget.org/v3/index.json Postie.sln
FROM restore AS unittests
#Copy all the source files
COPY --from=source /usr/local/src /usr/local/src
RUN cd Postie.Domain.UnitTests && \
    dotnet test --no-restore --logger:nunit --verbosity normal || true
FROM scratch as export-test-results
COPY --from=unittests /usr/local/src/Postie/Postie.Domain.UnitTests/TestResults/TestResults.xml ./Postie.Domain.UnitTests.TestResults.xml
My (abbreviated) Buildspec:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY_SERVER
  build:
    commands:
      - export IMAGE_TAG=:$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7).$CODEBUILD_BUILD_NUMBER
      - export JENKINS_TAG=:$(echo $JENKINS_VERSION_NUMBER | tr '+' '-')
      - echo Build started on `date` with version $IMAGE_TAG
      - cd ./Src/
      - echo Testing the Docker image...
      # see the following for why we use the --output option
      # https://docs.docker.com/engine/reference/commandline/build/#custom-build-outputs
      - docker build --target export-test-results -t ${DOCKER_REGISTRY_SERVER}/postie.api${IMAGE_TAG} -f ./Postie/Postie.Api/Dockerfile --output type=local,dest=out .
artifacts:
  files:
    - '**/*'
  name: builds/$JENKINS_VERSION_NUMBER/artifacts
(I should note that the "artifacts" step above is actually archiving my entire source tree to S3, so that I can prove the upload is working and so that I can try to find the "out" dir, but it's not to be found.)
I know this is old, but just in case anyone else stumbles across this one: you need to add the Docker BuildKit variable to the CodeBuild environment, otherwise the files will not get exported.
version: 0.2
... etc
phases:
  build:
    commands:
      ... etc
      - echo Testing the Docker image...
      - export DOCKER_BUILDKIT=1
      - docker build --target export-test-results ... etc
... etc
If you want to display more output along with this, you can also add
      - export BUILDKIT_PROGRESS=plain
      - export PROGRESS_NO_TRUNC=1
under the BuildKit variable.

How to run a script from file in another project using include in GitLab CI?

I'm trying to run a shell script from a template file located in another project, via my include.
How should this be configured to work? The scripts below are simplified versions of my code.
Project A
template.yml
deploy:
  before_script:
    - chmod +x ./.run.sh
    - source ./.run.sh
Project B
gitlab-ci.yml
include:
  - project: 'project-a'
    ref: master
    file: '/template.yml'

stages:
  - deploy
Clearly, the commands are actually being run from Project B and not Project A, where the template resides. This can further be confirmed by adding ls -a in the template file.
So how should we be calling run.sh? Both projects are on the same GitLab instance under different groups.
If you have access to both project A and project B, you can use multi-project pipelines: you trigger a pipeline in project A from project B.
In project A, you clone project B and run your script.
Project B
job 1:
  variables:
    PROJECT_PATH: "$CI_PROJECT_PATH"
    RELEASE_BRANCH: "$CI_COMMIT_BRANCH"
  trigger:
    project: project-a
    strategy: depend
Project A
job 2:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline" && $PROJECT_PATH && $RELEASE_BRANCH'
  script:
    - git clone -b "${RELEASE_BRANCH}" --depth 50 https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/${PROJECT_PATH}.git $(basename ${PROJECT_PATH})
    - cd $(basename ${PROJECT_PATH})
    - chmod +x ../.run.sh
    - source ../.run.sh
We've also run into this problem, and kind of wish GitLab allowed includes to "import" non-YAML files. Nevertheless, the simplest workaround we've found is to build a small Docker image in repo A that contains the script you want to run; repo B's job then uses that image as its image, so the file run.sh is available. :)
Minimal Dockerfile:
FROM bash:latest
COPY run.sh /usr/local/bin/
CMD run.sh
(Note: make sure you chmod +x run.sh before building your image, or add a RUN chmod +x /usr/local/bin/run.sh step)
Then you'd just add this to your Project B .gitlab-ci.yml:
stages:
  - deploy

deploy:
  image: registry.gitlab.com/... # Wherever you pushed your docker image to
  script: run.sh
It's also possible to fetch the scripts with curl instead of copying a whole repository:
- curl -H "PRIVATE-TOKEN:$PRIVATE_TOKEN" --create-dirs "$CI_API_V4_URL/projects/$CI_DEPLOY_PROJECT_ID/repository/archive?path=pathToFolderWithScripts" -o $TEMP_DIR/archive.tar.gz
- tar zxvf $TEMP_DIR/archive.tar.gz -C $TEMP_DIR --strip-components 3
- bash $TEMP_DIR/run.sh
That is: make a curl request to archive the folder with the scripts, unpack the archive into a temporary folder, and execute the script.
See: https://docs.gitlab.com/ee/api/repository_files.html#get-file-from-repository
GET /projects/:id/repository/files/:file_path/raw
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb?ref=master"
This will display the file (the file path is URL-encoded: app%2Fmodels%2Fkey%2Erb is app/models/key.rb). To download the file instead, just redirect the output with >> as below:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb?ref=master" >> file.extension
As hinted by the answer above, multi-project pipelines are the right approach for this.
Here's how it worked for me:
GroupX/ProjectA - contains the reusable code
# .gitlab-ci.yml
stages:
  - deploy

reusable_deploy_job:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline"' # run only if triggered by a pipeline
  script:
    - bash ./src/run.sh $UPSTREAM_CUSTOM_VARIABLE
GroupY/ProjectB - job that will reuse the code
# .gitlab-ci.yml
stages:
  - deploy

deploy_job:
  stage: deploy
  variables:
    UPSTREAM_CUSTOM_VARIABLE: CUSTOM_VARIABLE # pass this variable to downstream job
  trigger: groupx/projecta

extends in GitLab CI pipeline

I'm trying to include a file in which I declare some repetitive jobs; I'm using extends.
I always get this error: did not find expected key while parsing a block.
This is the template file:
.deploy_dev:
stage: deploy
  image: nexus
  script:
    - ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no" sellerbot@sb-dev -p 10290 'sudo systemctl restart mail.service'
  only:
    - dev
this is the main file
include:
- project: 'sellerbot/gitlab-ci'
ref: master
file: 'deploy.yml'
deploy_dev:
extends: .deploy_dev
Can anyone help me, please?
It looks like just stage: deploy has to be indented. In cases like this, it's a good idea to run the pipeline code through the GitLab CI lint tool, or just a YAML validator, to check that it is valid. When I checked the section from the template file in a YAML linter, I got:
(<unknown>): mapping values are not allowed in this context at line 3 column 8
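With that one key indented, the template parses; a sketch of the corrected file:
.deploy_dev:
  stage: deploy  # now indented under the job, which fixes the parse error
  image: nexus
  script:
    - ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no" sellerbot@sb-dev -p 10290 'sudo systemctl restart mail.service'
  only:
    - dev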

How do we use the 'variables' keyword in gitlab-ci.yml?

I am trying to make use of the variables: keyword documented in the GitLab CI documentation here:
From: https://docs.gitlab.com/ce/ci/yaml/README.html
variables
This feature requires gitlab-runner with version equal or greater than 0.5.0.
GitLab CI allows you to add to .gitlab-ci.yml variables that are set in the build environment. The variables are stored in the repository and are meant to store non-sensitive project configuration, i.e. RAILS_ENV or DATABASE_URL.
variables:
  DATABASE_URL: "postgres://postgres@postgres/my_database"
These variables can be later used in all executed commands and scripts.
The YAML-defined variables are also set to all created service containers, thus allowing to fine tune them.
When I attempt to use it, my builds do not run any stages and are marked successful anyway, a good sign of bad YAML. I pasted my gitlab-ci.yml contents into the LINT tool in the settings area, and the output error is:
Status: syntax is incorrect
Error: variables job: unknown parameter PACKAGE_NAME
I'm using the YAML syntax exactly as in the docs, yet it will not work. I'm unable to find any open bugs related to this. Below are my current versions and a sanitized version of my gitlab-ci.yml.
Gitlab Version: 7.13.2 Omnibus
Gitlab Runner Version: 0.5.2
gitlab-ci.yml (Sanitized)
types:
  - test
  - build

variables:
  PACKAGE_NAME: "awesome-django-app"
  PACKAGE_SUMMARY: "Awesome webapp backend."
  MAJOR_RELEASE: "1"
  MINOR_RELEASE: "0"
  PATCH_LEVEL: "0dev"
  DEV_DB_URL: "db"
  DEV_SERVER: "pydev.example.com"
  PROD_SERVER: "pyprod.example.com"
  TEST_SERVER: "pytest.example.com"

envtest:
  type: test
  script:
    - ". ./testbuild.sh"
  tags:
    - python2.7
    - postgres
    - linux
  except:
    - tags

buildrpm:
  type: build
  script:
    - mkdir -p ~/rpmbuild/SOURCES
    - mkdir -p ~/rpmbuild/SPECS
    - mkdir -p ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL
    - cp $PACKAGE_NAME.spec ~/rpmbuild/SPECS/.
    - cp -r * ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL/.
    - cd ~/tarbuild
    - tar -zcf ~/rpmbuild/SOURCES/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL.tar.gz *
    - cd ~
    - rm -Rf ~/tarbuild
    - rpmlint -i ~/rpmbuild/SPECS/$PACKAGE_NAME.spec
    - echo $CI_BUILD_ID
    - 'rpmbuild -ba ~/rpmbuild/SPECS/$PACKAGE_NAME.spec \
      --define="_build_number $CI_BUILD_ID" \
      --define="_python_version_min 2.7" \
      --define="_version $MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL" \
      --define="_package_name $PACKAGE_NAME" \
      --define="_summary $SUMMARY"'
    - scp rpmbuild/RPMS/noarch/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL-$CI_BUILD_ID.noarch.rpm $DEV_SERVER:~/.
  tags:
    - python2.7
    - postgres
    - linux
    - rpm
  except:
    - tags
Question:
How do I use this value properly?
Additional Info:
Removing this section from the YAML file causes everything to work, so the rest of the file is in working order. (Of course, undefined variables then lead to script errors...)
Even reducing the variables down to just PACKAGE_NAME for testing causes the same break.
The original answer is no longer correct: the original documentation now stands, and there are more ways as well. Variables can be created from the GUI, via the API, or by being defined in .gitlab-ci.yml.
https://docs.gitlab.com/ce/ci/variables/README.html
While it is in the documentation, I do not believe that variables were included in the latest version of GitLab (7.13). The functionality to read variables out of the YAML files was brought in by a commit by ayufan 9 days ago.
Looking at the parser on the 7.13 stable branch, you can see that his contribution did not make it in. So, assuming you're on 7.13 or earlier, I'm afraid we are out of luck. Since it is on master, I am fairly certain that we'll see it in the next release. Until then, we can either monkey-patch, do a git pull if we're using the source directly, or just rely on the project variables until the next release.