My GitLab CI pipeline generates some files from unit test results, and I need to move these files to the GitLab Runner host...
Is there some way to copy files from my GitLab CI job to another runner?
All files are processed and generated on the gitlab-runner host.
The GitLab Runner copies your repo and works with it on the destination host.
Generated job artifacts are stored on GitLab and persist between executions. You can access them from the gitlab-runner just like any other part of your repository.
For example:
artifact_download:
  stage: test
  script:
    - 'curl --location --output artifacts.zip --header "JOB-TOKEN: $CI_JOB_TOKEN" "https://gitlab.example.com/api/v4/projects/1/jobs/42/artifacts"'
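For the producing side, a minimal sketch of a job that declares artifacts so GitLab uploads them (the job name, script, and report path here are placeholders, not from the question):

unit_tests:
  stage: test
  script:
    - ./run_tests.sh # assumption: writes the generated files under reports/
  artifacts:
    paths:
      - reports/
    expire_in: 1 week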
Here are the docs that explain how to access them from the yml
I'm learning GitLab CI/CD; when the build has finished, I want to send the files in the artifacts to a server. Is this possible?
image: maven:3.8.1-jdk-11

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn clean install
  artifacts:
    paths:
      - "*/target/*.jar"

deploy:
  stage: deploy
  script:
    - scp -r <artifacts_path> root@test.com:~/Deploy
Could I get the real path of the artifacts on the runner and then send the files with scp?
Generally speaking, no. You must rely on the artifact restoration process. Keep in mind that (1) artifacts are generally not stored on the runner, and (2) Docker runners execute jobs inside a Docker container and typically would not have access to files on the runner host, even if artifacts were stored there.
When jobs start, artifacts from previous stages are restored into the workspace.
So, as an alternative solution, you can simply start with an empty workspace (don't checkout the repo), then upload all files in the workspace, which should be only the restored artifacts, assuming there are no file-based variables.
deploy:
  variables: # prevent checkout of repository
    GIT_STRATEGY: none
  stage: deploy
  script:
    - ls -laht # list files, which should be just the restored artifacts
    - scp -r ./ root@test.com:~/Deploy
Another way might be to just use the same glob pattern used in the artifacts:paths: to find the files and upload them.
variables:
  ARTIFACTS_PATTERN: "*/target/*.jar"

build:
  # ...
  artifacts:
    paths:
      - $ARTIFACTS_PATTERN

deploy:
  script: # something like this. Not sure if scp supports glob patterns
    - rsync -a -m --include="$ARTIFACTS_PATTERN" ./ user@remote:~/Deploy
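Note that an rsync --include rule only takes effect relative to a later --exclude rule, so a sketch of the usual filter idiom (the remote host is a placeholder) would be:

    - rsync -a -m --include='*/' --include="$ARTIFACTS_PATTERN" --exclude='*' ./ user@remote:~/Deploy

Here --include='*/' lets rsync descend into subdirectories, --exclude='*' drops everything that doesn't match the pattern, and -m prunes the empty directories left over.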
I'm trying to run a shell script from my template file, which is located in another project, via my include.
How should this be configured to work? The scripts below are simplified versions of my code.
Project A
template.yml
deploy:
  before_script:
    - chmod +x ./.run.sh
    - source ./.run.sh
Project B
gitlab-ci.yml
include:
  - project: 'project-a'
    ref: master
    file: '/template.yml'

stages:
  - deploy
Clearly, the commands are actually being run from Project B and not Project A, where the template resides. This can be further confirmed by adding ls -a in the template file.
So how should we be calling run.sh? Both projects are on the same GitLab instance under different groups.
If you have access to project A and project B, you can use multi-project pipelines: you trigger a pipeline in project A from project B.
In project A, you clone project B and run your script.
Project B
job 1:
  variables:
    PROJECT_PATH: "$CI_PROJECT_PATH"
    RELEASE_BRANCH: "$CI_COMMIT_BRANCH"
  trigger:
    project: project-a
    strategy: depend
Project A
job 2:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline" && $PROJECT_PATH && $RELEASE_BRANCH'
  script:
    - git clone -b "${RELEASE_BRANCH}" --depth 50 https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/${PROJECT_PATH}.git $(basename ${PROJECT_PATH})
    - cd $(basename ${PROJECT_PATH})
    - chmod +x ../.run.sh
    - source ../.run.sh
We've also run into this problem, and kinda wish GitLab allowed includes to "import" non-YAML files. Nevertheless, the simplest workaround we've found is to build a small Docker image in repo A which contains the script you want to run; repo B's job then uses that Docker image as its image, so the file run.sh is available :)
Minimal Dockerfile:
FROM bash:latest
COPY run.sh /usr/local/bin/
CMD run.sh
(Note: make sure you chmod +x run.sh before building your image, or add a RUN chmod +x /usr/local/bin/run.sh step)
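Before that, you'd build and push the image somewhere Project B can pull it from; a quick sketch, with a hypothetical registry path:

docker build -t registry.gitlab.com/group-a/project-a/run-helper:latest .
docker push registry.gitlab.com/group-a/project-a/run-helper:latest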
Then, you'd just add this to your Project B's .gitlab-ci.yml:
stages:
  - deploy

deploy:
  image: registry.gitlab.com/... # Wherever you pushed your docker image to
  script: run.sh
It's also possible to request a script via curl instead of copying a whole repository:
- curl -H "PRIVATE-TOKEN:$PRIVATE_TOKEN" --create-dirs "$CI_API_V4_URL/projects/$CI_DEPLOY_PROJECT_ID/repository/archive?path=pathToFolderWithScripts" -o $TEMP_DIR/archive.tar.gz
- tar zxvf $TEMP_DIR/archive.tar.gz -C $TEMP_DIR --strip-components 3
- bash $TEMP_DIR/run.sh
That is: make a curl request to the API to fetch an archive of the folder with the scripts, unpack the scripts into a temporary folder (the --strip-components count depends on the depth of the folder path in the archive), and execute the script.
Reference: https://docs.gitlab.com/ee/api/repository_files.html#get-file-from-repository
GET /projects/:id/repository/files/:file_path/raw
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb/raw?ref=master"
This will display the file. To download it, just append >> as below:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb/raw?ref=master" >> file.extension
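Tying this back to the original question, here's a sketch of fetching and running the script inside a job via the raw endpoint (the project ID, file name, and token variable are assumptions):

deploy:
  script:
    - curl --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" "$CI_API_V4_URL/projects/13083/repository/files/run%2Esh/raw?ref=master" -o run.sh # assumption: run.sh lives at the repo root of project 13083
    - chmod +x run.sh
    - ./run.sh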
As hinted by the answer above, multi-project pipelines are the right approach for this.
Here's how it worked for me:
GroupX/ProjectA - contains reusable code
# .gitlab-ci.yml
stages:
  - deploy

reusable_deploy_job:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline"' # run only if triggered by a pipeline
  script:
    - bash ./src/run.sh $UPSTREAM_CUSTOM_VARIABLE
GroupY/ProjectB - job that will reuse the code
# .gitlab-ci.yml
stages:
  - deploy

deploy_job:
  stage: deploy
  variables:
    UPSTREAM_CUSTOM_VARIABLE: CUSTOM_VARIABLE # pass this variable to the downstream job
  trigger: groupx/projecta
I have set up my CI so that I can manually create a release tag when all tests succeed for a new commit on the master branch. For this I have created a manual step in the CI config like so:
.release-template:
  stage: releasing
  dependencies:
    - assemble
  script:
    - ./gradlew reckonTagPush -Preckon.scope=$scope -Preckon.stage=$stage -Dorg.ajoberstar.grgit.auth.username=$GIT_USER -Dorg.ajoberstar.grgit.auth.password=$GIT_PASSWORD
  only:
    - master
  when: manual # ONLY MANUAL RELEASES, ONLY FROM MASTER
release-major:
  extends: .release-template
  variables:
    scope: major
    stage: final

release-minor:
  extends: .release-template
  variables:
    scope: minor
    stage: final

release-patch:
  extends: .release-template
  variables:
    scope: patch
    stage: final
This setup fails with an authentication error.
Execution failed for task ':reckonTagPush'.
> org.eclipse.jgit.api.errors.TransportException: https://gitlab-ci-token@gitlab.com/<group>/<project>.git: not authorized
I am running this on gitlab.com on a shared runner.
The username and password are configured in the project's GitLab CI variables. When running this locally inside the same Docker image that is used by the GitLab runner, it works fine. So there must be something special about the way the GitLab runner executes the Gradle tasks, or communicates with the GitLab git repo.
Solved the issue with push access to the git repo by adding the following script:
script:
  - url_host=`git remote get-url origin | sed -e "s/https:\/\/gitlab-ci-token:.*@//g"`
  - git remote set-url origin "https://gitlab-ci-token:$GIT_TOKEN@$url_host"
  - ./gradlew reckonTagPush -Preckon.scope=$scope -Preckon.stage=$stage -Dorg.ajoberstar.grgit.auth.username="$GIT_USER" -Dorg.ajoberstar.grgit.auth.password="$GIT_TOKEN"
The notable changes here are setting the git remote URL, as well as surrounding the GitLab CI variables with double quotes when passing them to the reckon plugin.
I'm new to Gitlab CI.
I have configured .gitlab-ci.yml file, and using CI Lint it has passed the validation process.
Based on this documentation, I can see a specific runner should be configured on a virtual machine, a VPS, a bare-metal machine, a Docker container, or even a cluster of containers.
But I can see GitLab has its own shared runners, enabled by default.
The question is how to use this shared runner?
When I visit the Pipelines page, I can only see the blue Get Started with Pipeline button, and when I click it I am redirected to this page.
Here's my .gitlab-ci.yml content:
before_script:
  - eval $(ssh-agent -s)

stage_deploy:
  only:
    - testing
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - ssh root@1.2.3.4 "sh update_app.sh"
It will only run the job for your testing branch; have you added the .gitlab-ci.yml file to that branch too?
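If you just want to confirm that the shared runners pick up your project at all, a minimal smoke-test job (the name and echo text are arbitrary) with no only: restriction runs on every branch:

smoke_test:
  script:
    - echo "Hello from a shared runner"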
GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "New file" in one of your projects, then select .gitlab-ci.yml and Docker). The file looks like this, and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to GitLab's registry. How can we publish to Docker Hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry URL
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry URL can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in GitLab settings
I wanted to publish gableroux/unity3d images using GitLab CI; here's what I used in the project's Settings > CI/CD > Variables:
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>, so the registry URL needs to be updated: use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer it verbose, so I kept the full registry URL.
This answer should also cover other registries; just update the environment variables accordingly. 🙌
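For instance, for a hypothetical self-hosted registry, the same four variables would look something like this (hostname, user, and project are placeholders):

CI_REGISTRY_USER=myuser
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=registry.example.com
CI_REGISTRY_IMAGE=registry.example.com/myuser/myproject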
To expand on GabLeRoux's answer, I had issues at the pushing stage of the GitLab CI build:
I had issues on the pushing stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index.), I was able to push successfully.