I am running a local GitLab server with a self-signed certificate. My pipeline builds my application and creates a release, but I get x509 errors. I tried the workaround mentioned in the GitLab documentation, but it doesn't work. Everything works fine when tested on gitlab.com.
To summarize: first I build my application to generate a WAR file as an artifact, then the artifact is uploaded using the GitLab API to generate the URL and file path, and finally the release job adds the tag and generates the release page.
My .gitlab-ci.yml:
---
variables:
  PACKAGE_VERSION: "V7"
  GENERIC_WAR: "mypackage-${PACKAGE_VERSION}.war"
  PACKAGE_REGISTRY_URL: "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/${CI_PROJECT_NAME}/${PACKAGE_VERSION}"

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: always
      variables:
        SERVER: "${PROD_SERVER}"
    - if: $CI_COMMIT_BRANCH == "test"
      when: always
      variables:
        SERVER: "${TEST_SERVER}"
    - if: $CI_COMMIT_BRANCH == "feature/release"
      when: always
      variables:
        SERVER: "${TEST_SERVER}"

stages:
  - build
  - upload
  - prepare
  - release
  - deploy

build-application:
  stage: build
  image: maven:3.8.4-jdk-8
  script:
    - mvn clean package -U -DskipTests=true
    - echo $CI_COMMIT_TAG
  artifacts:
    expire_in: 2h
    when: always
    paths:
      - target/*.war

upload:
  stage: upload
  image: curlimages/curl:latest
  needs:
    - job: build-application
      artifacts: true
  # rules:
  #   - if: $CI_COMMIT_TAG
  script:
    - |
      curl -k --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --upload-file target/*.war "${PACKAGE_REGISTRY_URL}/${GENERIC_WAR}"

prepare_job:
  stage: prepare
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH == "feature/release"
  script:
    - echo "EXTRA_DESCRIPTION=some message" >> variables.env  # Generate the EXTRA_DESCRIPTION and TAG environment variables
    - echo "TAG=v$(cat VERSION)" >> variables.env
  artifacts:
    reports:
      dotenv: variables.env

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  needs:
    - job: prepare_job
      artifacts: true
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH == "feature/release"
  before_script:
    - apk --no-cache add openssl ca-certificates
    - mkdir -p /usr/local/share/ca-certificates/extra
    - openssl s_client -connect ${CI_SERVER_HOST}:${CI_SERVER_PORT} -servername ${CI_SERVER_HOST} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | tee "/usr/local/share/ca-certificates/${CI_SERVER_HOST}.crt" >/dev/null
    - update-ca-certificates
  script:
    - echo 'running release_job for $TAG'
  release:
    name: "Release $TAG"
    description: "Created using the release-cli $EXTRA_DESCRIPTION"
    tag_name: "$TAG"
    ref: "$CI_COMMIT_SHA"
    assets:
      links:
        - name: "{$GENERIC_WAR}"
          url: "${PACKAGE_REGISTRY_URL}"
          filepath: "/${GENERIC_WAR}"
Release job execution log:
Running with gitlab-runner 14.5.2 (e91107dd)
on Shared-Docker mdaS6_cA
Preparing the "docker" executor
00:03
Using Docker executor with image registry.gitlab.com/gitlab-org/release-cli:latest ...
Pulling docker image registry.gitlab.com/gitlab-org/release-cli:latest ...
Using docker image sha256:c2d3a3c3b9ad5ef63478b6a6b757632dd7994d50e603ec69999de6b541e1dca8 for registry.gitlab.com/gitlab-org/release-cli:latest with digest registry.gitlab.com/gitlab-org/release-cli@sha256:68e201226e1e76cb7edd327c89eb2d5d1a1d2b0fd4a6ea5126e24184d9aa4ffc ...
Preparing environment
00:01
Running on runner-mdas6ca-project-32-concurrent-0 via Docker-Server1...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/Saiida/backend-endarh/.git/
Checking out 7735e9ea as feature/release...
Removing target/
Removing variables.env
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:02
Using docker image sha256:c2d3a3c3b9ad5ef63478b6a6b757632dd7994d50e603ec69999de6b541e1dca8 for registry.gitlab.com/gitlab-org/release-cli:latest with digest registry.gitlab.com/gitlab-org/release-cli@sha256:68e201226e1e76cb7edd327c89eb2d5d1a1d2b0fd4a6ea5126e24184d9aa4ffc ...
$ apk --no-cache add openssl ca-certificates
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/2) Installing ca-certificates (20191127-r5)
(2/2) Installing openssl (1.1.1l-r0)
Executing busybox-1.32.1-r6.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 7 MiB in 16 packages
$ mkdir -p /usr/local/share/ca-certificates/extra
$ openssl s_client -connect ${CI_SERVER_HOST}:${CI_SERVER_PORT} -servername ${CI_SERVER_HOST} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | tee "/usr/local/share/ca-certificates/${CI_SERVER_HOST}.crt" >/dev/null
$ update-ca-certificates
Warning! Cannot copy to bundle: /usr/local/share/ca-certificates/extra
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-extra.pem does not contain exactly one certificate or CRL: skipping
$ echo 'running release_job for $TAG'
running release_job for $TAG
Executing "step_release" stage of the job script
00:01
$ release-cli create --name "Release $TAG" --description "Created using the release-cli $EXTRA_DESCRIPTION" --tag-name "$TAG" --ref "$CI_COMMIT_SHA" --assets-link "{\"url\":\"${PACKAGE_REGISTRY_URL}\",\"name\":\"{$GENERIC_WAR}\",\"filepath\":\"/${GENERIC_WAR}\"}"
time="2021-12-23T08:47:48Z" level=info msg="Creating Release..." cli=release-cli command=create name="Release v" project-id=32 ref=7735e9ea9422e20b09cae2072c692843b118423a server-url="https://gitlab.endatamweel.tn" tag-name=v version=0.10.0
time="2021-12-23T08:47:48Z" level=fatal msg="run app" cli=release-cli error="failed to create release: failed to do request: Post \"https://gitlab.endatamweel.tn/api/v4/projects/32/releases\": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0" version=0.10.0
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
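The last log line pinpoints the real problem: the Go-based release-cli rejects certificates that only set the legacy Common Name field, so trusting the certificate with update-ca-certificates cannot help. The durable fix is to reissue the GitLab server certificate with a Subject Alternative Name. A minimal sketch (the file names are placeholders, -addext needs OpenSSL 1.1.1+, and the new certificate must then be installed on the GitLab server):

# Reissue the self-signed certificate with a SAN entry (hostname taken from the log above)
openssl req -x509 -nodes -days 365 -newkey rsa:4096 \
  -keyout gitlab.endatamweel.tn.key -out gitlab.endatamweel.tn.crt \
  -subj "/CN=gitlab.endatamweel.tn" \
  -addext "subjectAltName=DNS:gitlab.endatamweel.tn"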
I managed to get it to work by replacing the YAML form of the release job with an explicit release-cli command and its arguments, setting the --insecure-https option (not suitable for production, of course):
release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  needs:
    - job: prepare_job
      artifacts: true
  rules:
    - if: $CI_COMMIT_TAG
      when: never  # Do not run this job when a tag is created manually
    - if: $CI_COMMIT_BRANCH == "feature/release"  # Run this job when commits are pushed or merged to the release branch
  script:
    - |
      release-cli --insecure-https=true create --name "Release $TAG" --tag-name $TAG --ref $CI_COMMIT_SHA \
        --assets-link "{\"name\":\"${GENERIC_WAR}\",\"url\":\"${PACKAGE_REGISTRY_URL}/${GENERIC_WAR}\", \"link_type\":\"package\"}"
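As the error message itself suggests, another stopgap keeps the declarative release: keyword and re-enables legacy Common Name matching via GODEBUG. This is only a sketch, not a guaranteed fix: Go 1.17 removed the toggle, so it works only if the release-cli binary was built with Go 1.16 or older.

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  variables:
    GODEBUG: x509ignoreCN=0  # stopgap: honoured only by binaries built with Go 1.16 or older
  # needs, rules, before_script, script and the release: section stay exactly as above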
Related
I've been trying to continuously improve our GitLab pipeline for a small project, add more automation, and speed up deployment time. There are several challenges I've faced.
We are using release-cli to create specific tags for every environment: prod, stage and dev. By its nature, every run creates a tag that starts a fresh pipeline, so after every run we need to manually trigger the "package" stage from the new pipeline (see attached screenshot). I wonder how we can use release-cli more effectively and what I should add to the pipeline to fully automate its runs, at least for the production environment? (One possible rules-based fix is sketched after the pipeline below.)
Here's the code:
stages:
  - install
  - release
  - package
  - deploy

### FOR MAIN BRANCH

.main-branch:
  only:
    - main

.using-cache:
  cache:
    key: main
    paths:
      - node_modules
    policy: pull

install prod:
  extends: .main-branch
  tags:
    - canary
  stage: install
  script:
    - echo "Install"

### RELEASE

patch release prod:
  tags:
    - canary
  extends: .main-branch
  needs: [ "install prod" ]
  stage: release
  before_script:
    - TAG=$(git describe origin/$CI_COMMIT_REF_NAME --abbrev=0 --tags)
    # Drop the last version component and replace it with the short SHA,
    # e.g. TAG=v1.2.9 -> NEW_VERSION=v1.2.<short-sha>
    - NEW_VERSION=${TAG%.${TAG##*.}}.$CI_COMMIT_SHORT_SHA
  script:
    - echo "Running the release job. Set tag $NEW_VERSION"
    - release-cli create
      --name "Auto-release v$NEW_VERSION"
      --description "Auto-release v$NEW_VERSION"
      --tag-name "$NEW_VERSION"

package prod:
  extends: .main-branch
  tags:
    - canary
  stage: package
  when: manual
  script:
    - docker-compose build
    - docker-compose push
  only:
    - tags

deploy production:
  tags:
    - canary
  only:
    - tags
  needs: [ "package prod" ]
  extends: .main-branch
  stage: deploy
  environment:
    name: production
    url: https://saharok.store/
    action: start
  script:
    - kubectl config use-context zefir-projects/saharok-monorepo:saharok
    - kubectl create secret generic server-stable --from-env-file=$PROD_ENV_FILE -o yaml --dry-run=client | kubectl apply -f -
    - helm upgrade client-stable ./.helm/client -i --set image.tag=$CI_COMMIT_TAG --kube-context zefir-projects/saharok-monorepo:saharok
    - helm upgrade server-stable ./.helm/server -i --set image.tag=$CI_COMMIT_TAG --kube-context zefir-projects/saharok-monorepo:saharok

### FOR STAGE BRANCH

.stage-branch:
  only:
    - stage

.using-cache stage:
  cache:
    key: stage
    paths:
      - node_modules
    policy: pull

install stage:
  extends: .stage-branch
  tags:
    - canary
  stage: install
  script:
    - echo "Install"

patch release stage:
  tags:
    - canary
  extends: .stage-branch
  stage: release
  needs: [ "install stage" ]
  before_script:
    - TAG=$(git describe origin/$CI_COMMIT_REF_NAME --abbrev=0 --tags)
    - NEW_VERSION=${TAG%.${TAG##*.}}.$CI_COMMIT_SHORT_SHA
  script:
    - echo "Running the release job. Set tag $NEW_VERSION"
    - release-cli create
      --name "Auto-release v$NEW_VERSION"
      --description "Auto-release v$NEW_VERSION"
      --tag-name "$NEW_VERSION"

package stage:
  extends: .stage-branch
  tags:
    - canary
  stage: package
  when: manual
  script:
    - docker-compose build
    - docker-compose push
  only:
    - tags

deploy staging:
  tags:
    - canary
  only:
    - tags
  needs: [ "package stage" ]
  extends: .stage-branch
  stage: deploy
  when: manual
  environment:
    name: staging
    url: https://staging.saharok.store/
    action: start
  script:
    - kubectl config use-context zefir-projects/saharok-monorepo:saharok
    - kubectl create secret generic server-staging --from-env-file=$STAGE_ENV_FILE -o yaml --dry-run=client | kubectl apply -f -
    - helm upgrade client-staging ./.helm/client -i --set image.tag=$CI_COMMIT_TAG -f ./.helm/client/values.staging.yaml --kube-context zefir-projects/saharok-monorepo:saharok
    - helm upgrade server-staging ./.helm/server -i --set image.tag=$CI_COMMIT_TAG -f ./.helm/server/values.staging.yaml --kube-context zefir-projects/saharok-monorepo:saharok

### FOR DEV BRANCH

install dev:
  tags:
    - canary
  stage: install
  when: manual
  script:
    - echo "Install"

patch release dev:
  tags:
    - canary
  stage: release
  needs: [ "install dev" ]
  before_script:
    - TAG=$(git describe origin/$CI_COMMIT_REF_NAME --abbrev=0 --tags --always)
    - NEW_VERSION=${TAG%.${TAG##*.}}.$CI_COMMIT_SHORT_SHA
  script:
    - echo "Running the release job. Set tag $NEW_VERSION"
    - release-cli create
      --name "Auto-release v$NEW_VERSION for dev"
      --description "Auto-release v$NEW_VERSION for dev"
      --tag-name "$NEW_VERSION"

package dev:
  tags:
    - canary
  stage: package
  when: manual
  script:
    - docker-compose build
    - docker-compose push
  only:
    - tags

deploy dev:
  tags:
    - canary
  only:
    - tags
  stage: deploy
  needs: [ "package dev" ]
  environment:
    name: dev
    url: https://dev.saharok.store/
    action: start
  script:
    - kubectl config use-context zefir-projects/saharok-monorepo:saharok
    - kubectl create secret generic server-dev --from-env-file=$DEV_ENV_FILE -o yaml --dry-run=client | kubectl apply -f -
    - helm upgrade client-dev ./.helm/client -i --set image.tag=$CI_COMMIT_TAG -f ./.helm/client/values.dev.yaml --kube-context zefir-projects/saharok-monorepo:saharok
    - helm upgrade server-dev ./.helm/server -i --set image.tag=$CI_COMMIT_TAG -f ./.helm/server/values.dev.yaml --kube-context zefir-projects/saharok-monorepo:saharok
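One possible way to drop the manual trigger is to migrate the package jobs from only/except to rules, so they run automatically in the pipeline that the freshly pushed tag starts. A sketch for the prod variant (note that rules: cannot be combined with the only: clause inherited from .main-branch, so that template would need migrating as well):

package prod:
  tags:
    - canary
  stage: package
  rules:
    # Run automatically in pipelines started by a tag, i.e. the tag release-cli just created
    - if: $CI_COMMIT_TAG
  script:
    - docker-compose build
    - docker-compose push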
I'm using GitLab CI (13.9) to test and build a React project.
On the branch develop everything works fine.
On the branch validation, the build job can't install a private package:
[2/5] Resolving packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/@company%2fname-of-my-package: Not found".
info If you think this is a bug, please open a bug report with the information provided in "/builds/code/conference/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
error Command failed with exit code 1.
The .gitlab-ci.yml is the same for both branches:
variables:
  DOCKER_DRIVER: overlay2
  GIT_SSL_NO_VERIFY: 'true'
  DOCKER_TLS_CERTDIR: ''

stages:
  - install
  - test
  - build

install_dependencies:
  image: node:lts-alpine
  stage: install
  before_script:
    - apk update && apk add git openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}' > .npmrc
  artifacts:
    expire_in: 1 hour
    paths:
      - node_modules/
  script:
    - yarn install
test-job:
  image: node:lts-alpine
  stage: test
  script:
    - yarn run test

build-job:
  image: node:lts-alpine
  stage: build
  only:
    - develop
    - validation
  artifacts:
    expire_in: 1 hour
    paths:
      - dist/
  script:
    - yarn run build
The package.json is the same for both branches.
Both branches are protected.
develop is the project default branch.
There is no error log available at /builds/code/conference/yarn-error.log.
There is no branch-specific variable configuration in .gitlab-ci.yml for develop.
What could cause this to fail?
I managed to make my CI pass on the branch validation by copying the SSH/npmrc configuration into my build-job:
variables:
  DOCKER_DRIVER: overlay2
  GIT_SSL_NO_VERIFY: 'true'
  DOCKER_TLS_CERTDIR: ''

stages:
  - install
  - test
  - build
  - docker-build-push

install_dependencies:
  image: node:lts-alpine
  stage: install
  before_script:
    - apk update && apk add git openssh-client
    # run ssh agent
    - eval $(ssh-agent -s)
    # add ssh key stored in gitlab ci variables
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}' > .npmrc
  artifacts:
    expire_in: 1 hour
    paths:
      - node_modules/
      - .npmrc
  script:
    - yarn install

test-job:
  image: node:lts-alpine
  stage: test
  script:
    - yarn run test

build-job:
  image: node:lts-alpine
  stage: build
  only:
    - develop
    - validation
  artifacts:
    expire_in: 1 hour
    paths:
      - dist/
  before_script:
    - apk update && apk add git openssh-client
    # run ssh agent
    - eval $(ssh-agent -s)
    # add ssh key stored in gitlab ci variables
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}' > .npmrc
  script:
    - yarn run build

docker-job:
  services:
    - docker:dind
  image: docker:18.09.9
  stage: docker-build-push
  only:
    - develop
    - validation
  before_script:
    - apk update && apk add git rsync curl jq
    - docker login -u gitlab-ci-token -p ${PUBLISH_KEY} registry.apizee.com
  script:
    - docker login -u gitlab-ci-token -p ${PUBLISH_KEY} registry.apizee.com
    - /bin/sh docker/init.sh
    - docker push registry.apizee.com/docker/apizee-rancher/conf4:${CI_COMMIT_REF_NAME}
    - '[[ -f "docker/deploy.sh" ]] && sh docker/deploy.sh "${CI_COMMIT_REF_NAME}"'
So could there be a default cache/artifacts setting that applies to the default branch but not to other branches?
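Whatever the root cause, the duplicated SSH/npmrc setup can be factored into a hidden job and shared with extends, so both jobs stay in sync. A sketch reusing the commands above (.npm-auth is a name introduced here for illustration):

.npm-auth:
  before_script:
    - apk update && apk add git openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}' > .npmrc

install_dependencies:
  extends: .npm-auth
  image: node:lts-alpine
  stage: install
  artifacts:
    expire_in: 1 hour
    paths:
      - node_modules/
      - .npmrc
  script:
    - yarn install

build-job:
  extends: .npm-auth
  image: node:lts-alpine
  stage: build
  script:
    - yarn run build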
Hello, I'm trying to do a CI/CD integration of ZAP on GitLab. To do this, I wrote the following code, but after it runs the report is not generated. What should I do?
The scan completes, but the report is not available:
test_site:
  stage: test
  image: owasp/zap2docker-stable
  when: always
  script:
    - mkdir -p /zap/wrk/
    - zap-full-scan.py -t https://example.com -r report.html
    - cp /zap/wrk/report.html .
  artifacts:
    when: always
    paths: [report.html]
  allow_failure: false
I managed to generate the report by adding || true or the -I option to the scan command, but the objective is to generate the report without adding either.
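For context: zap-full-scan.py reports findings through a non-zero exit code, so the job's script aborts before the cp line ever runs, which is why || true or -I made the report appear. One possible way to keep the failing exit code and still collect the report is after_script, which runs even when the script section fails. A sketch based on the job above:

test_site:
  stage: test
  image: owasp/zap2docker-stable
  script:
    - mkdir -p /zap/wrk/
    - zap-full-scan.py -t https://example.com -r report.html
  after_script:
    # runs even if the scan exits non-zero, so the report is still copied
    # into the project directory for artifact upload
    - cp /zap/wrk/report.html .
  artifacts:
    when: always
    paths: [report.html]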
I struggled to get this done for more than two days. I started with the approach of using image: owasp/zap2docker-stable, but then had to fall back to vanilla docker commands to execute it.
The reason is that if you use the -r parameter, ZAP will attempt to generate the report.html file at /zap/wrk/. To make this work, we have to mount a directory onto /zap/wrk.
When you do so, it is important that the ZAP container can perform write operations on the mounted directory.
So, below is the working solution.
test_site:
  stage: test
  image: docker:latest
  script:
    # The zap-reports folder created locally is mounted into the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be written.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the above code are:
Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk.
Pass the current user and group of the host machine to the docker container, so the process runs as the same user/group. This allows the reports to be written to the directory mounted from the local host, and is done by -u $(id -u ${USER}):$(id -g ${USER}).
Below is the working code with image: owasp/zap2docker-stable:
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html
Hi, I have a YAML file as below, but I want to run a specific stage only for release-candidate branches. The name of the release branch can change, like cis-rel1.0 one time and cis-rel2.0 the next, and so on.
image: java:8

stages:
  - build
  - deploy

build:
  stage: build
  script: ./mvnw package
  artifacts:
    paths:
      - target/demo-0.0.1-SNAPSHOT.jar

production:
  stage: deploy
  script:
    - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
    - ./cf login -u $CF_USERNAME -p $CF_PASSWORD -a api.run.pivotal.io
    - ./cf push
  only:
    - cis-rel1.0
Yes, you can achieve that using a regex pattern in .gitlab-ci.yml, as shown below. The regex /^cis-rel.*$/ will match your release branch names:
image: java:8

stages:
  - build
  - deploy

build:
  stage: build
  script: ./mvnw package
  artifacts:
    paths:
      - target/demo-0.0.1-SNAPSHOT.jar

production:
  stage: deploy
  script:
    - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
    - ./cf login -u $CF_USERNAME -p $CF_PASSWORD -a api.run.pivotal.io
    - ./cf push
  only:
    - /^cis-rel.*$/
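On GitLab versions that support it, the same match can be expressed with the newer rules: syntax instead of only:. A sketch of just the deploy job:

production:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^cis-rel/'  # matches cis-rel1.0, cis-rel2.0, ...
  script:
    - curl --location "https://cli.run.pivotal.io/stable?release=linux64-binary&source=github" | tar zx
    - ./cf login -u $CF_USERNAME -p $CF_PASSWORD -a api.run.pivotal.io
    - ./cf push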
I am searching for a way to use kubectl in GitLab.
So far I have the following script:
deploy_to_dev:
  stage: deploy
  image: docker:dind
  environment:
    name: dev
  script:
    - mkdir -p $HOME/.kube
    - echo $KUBE_CONFIG | base64 -d > $HOME/.kube/config
    - kubectl config view
  only:
    - develop
But it says that GitLab does not know kubectl. Can you point me in the right direction?
You are using the docker:dind image, which does not have the kubectl binary. You should either bring your own image that includes the binary or download it during the job:
deploy_to_dev:
  stage: deploy
  image: alpine:3.7
  environment:
    name: dev
  script:
    - apk update && apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl && mv ./kubectl /usr/local/bin/kubectl
    - mkdir -p $HOME/.kube
    - echo -n $KUBE_CONFIG | base64 -d > $HOME/.kube/config
    - kubectl config view
  only:
    - develop
Alternatively, use the image google/cloud-sdk, which comes with gcloud and kubectl preinstalled:
build:
  stage: build
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    # Make gcloud available
    - source /root/.bashrc