GitLab CI/CD: missing /usr/local/bin/gitlab-runner. Uploading artifacts is disabled

So I have quite a simple .gitlab-ci.yml script:
test stage:
  stage: build
  artifacts:
    paths:
      - result/
  script:
    …
So the problem is that when the job gets to “Uploading artifacts for successful job”, it prints “Missing /usr/local/bin/gitlab-runner. Uploading artifacts is disabled”.
I tried changing the owner and group of the gitlab-runner file to “gitlab-runner”, and even gave it 777 permissions, but nothing helped.
Any ideas where I'm going wrong?

If you are using the latest GitLab version, the runner binary is installed at /usr/bin/gitlab-runner, but your setup is trying to use /usr/local/bin/gitlab-runner.
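To confirm where the binary actually lives, and to bridge the path mismatch without reinstalling, something like the following should work (a minimal sketch, assuming shell access to the runner host; the symlink target is the path from the error message):

# Show where the gitlab-runner binary actually is
which gitlab-runner
# If it prints /usr/bin/gitlab-runner, point the expected path at it
sudo ln -s /usr/bin/gitlab-runner /usr/local/bin/gitlab-runner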


GitLab CI/CD: Could I get the artifacts' real path in the runner, then send the files with scp?

I'm learning GitLab CI/CD. When the build finishes, I want to send the files in the artifacts to a remote server. Is this possible?
image: maven:3.8.1-jdk-11

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn clean install
  artifacts:
    paths:
      - "*/target/*.jar"

deploy:
  stage: deploy
  script:
    - scp -r <artifacts_path> root@test.com:~/Deploy
Could I get the artifacts' real path in the runner and then send the files with scp?
Generally speaking, no. You must rely on the artifact restoration process. Keep in mind that (1) artifacts are generally not stored on the runner, and (2) Docker runners execute jobs inside a Docker container and typically would not have access to files on the runner host, even if artifacts were stored there.
When jobs start, artifacts from previous stages are restored into the workspace.
So, as an alternative solution, you can simply start with an empty workspace (don't check out the repo), then upload all files in the workspace, which should be only the restored artifacts, assuming there are no file-based variables.
deploy:
  stage: deploy
  variables:
    GIT_STRATEGY: none  # prevent checkout of the repository
  script:
    - ls -laht  # list files, which should be just the restored artifacts
    - scp -r ./ root@test.com:~/Deploy
Another way might be to just use the same glob pattern used in the artifacts:paths: to find the files and upload them.
variables:
  ARTIFACTS_PATTERN: "*/target/*.jar"

build:
  # ...
  artifacts:
    paths:
      - $ARTIFACTS_PATTERN

deploy:
  script: # something like this. Not sure if scp supports glob patterns
    # rsync needs a source path; the include/exclude rules apply the glob
    - rsync -a -m --include="*/" --include="$ARTIFACTS_PATTERN" --exclude="*" ./ user@remote:~/Deploy
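If rsync isn't available on the runner image, a find-based sketch could do the same job (assuming the same ARTIFACTS_PATTERN variable as above; root@test.com is the illustrative host from the question):

deploy:
  stage: deploy
  script:
    # -path matches against the whole relative path, so prefix the pattern with ./
    - find . -path "./$ARTIFACTS_PATTERN" -exec scp {} root@test.com:~/Deploy \;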

How to configure lfs.fetchinclude in GitLab CI

I want git lfs fetch to fetch only some directories in GitLab CI, but it failed.
The gitlab-runner version was 11.8.0~beta.1077.
I configured it like this:
variables:
  # Please edit to your GitLab project
  GIT_STRATEGY: clone
  GIT_CHECKOUT: "false"
script:
  - git config lfs.fetchinclude "xxx/xxx/, test/"
but CI errors with:
root config contains unknown keys: script
How do I fix it?
The first part is your syntax error: the script key must be part of a job, e.g. like this:
build:
  stage: build
  script:
    - git config ...
However, I think the CI runner will fetch LFS files automatically, and the script is only run after cloning.
So I think you have to disable automatic fetching first, and then this might work:
build:
  stage: build
  before_script:
    - git config lfs.fetchinclude "xxx/xxx/, test/"
    - git lfs pull
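One way to actually disable the automatic fetch is Git LFS's skip-smudge switch, set as a CI variable; a sketch combining it with the include filter (GIT_LFS_SKIP_SMUDGE is a standard Git LFS environment variable; the paths are the ones from the question):

variables:
  GIT_LFS_SKIP_SMUDGE: "1"  # clone without downloading any LFS content

build:
  stage: build
  before_script:
    - git config lfs.fetchinclude "xxx/xxx/, test/"
    - git lfs pull  # now fetches only the included paths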

Packer build from a GitLab pipeline

I am trying to run my Packer build in a GitLab pipeline. I didn't find examples on the internet, but I have seen that there is a Docker image, so I was hoping this YAML would do the job:
image: hashicorp/packer

stages:
  - build

build:
  stage: build
  script:
    - echo "Hello world"
    - packer build ./definition.json
  only:
    - master
But I don't understand the behavior: the CI pulls the image and clones the repo, then it ends up like this:
Skipping Git submodules setup
Usage: packer [--version] [--help] <command> [<args>]

Available commands are:
    build       build image(s) from template
    console     creates a console for testing variable interpolation
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    validate    check that a template is valid
    version     Prints the Packer version

Usage: packer [--version] [--help] <command> [<args>]

Available commands are:
    build       build image(s) from template
    console     creates a console for testing variable interpolation
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    validate    check that a template is valid
    version     Prints the Packer version

ERROR: Job failed: exit code 127
It doesn't even print my echo "Hello world", and it prints the CLI usage twice. Why this behavior?
I found how to fix it. The image's default entrypoint is the packer binary rather than a shell, so the runner's generated job script is passed to packer as arguments instead of being executed, which is why the usage text is printed and the job fails with exit code 127. I had to change:
image: hashicorp/packer

into:

image:
  name: hashicorp/packer
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
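For reference, the full pipeline from the question with the entrypoint override applied looks like this (nothing else needs to change):

image:
  name: hashicorp/packer
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

stages:
  - build

build:
  stage: build
  script:
    - echo "Hello world"
    - packer build ./definition.json
  only:
    - master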

GitLab CI cannot clone

I have a very basic integration configured for GitLab CI, but it fails almost at the beginning, when it has to clone the code.
My configuration is this:
image: node:latest

stages:
  - build
  - test

cache:
  paths:
    - node_modules/
    - dist/

build-prod:
  stage: build
  script:
    - npm install
    - npm run build-prod
  artifacts:
    paths:
      - node_modules/
      - dist/

test_with_karma:
  stage: test
  script: ng test
And the error that I get is this:
Running with gitlab-runner 11.7.0 (8bb608ff)
on fakehost 2eaf11ea
Using Docker executor with image node:latest ...
Pulling docker image node:latest ...
Using docker image sha256:8c67bfd7b95bdc535edc4a4144f5392b0f73efd6385fbcb47747d028d7059359 for node:latest ...
Running on runner-2eaf11ea-project-56-concurrent-0 via fakehost...
Cloning repository...
Cloning into '/builds/redacted/frontend'...
remote: You are not allowed to download code from this project.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@working-domain.com/redacted/frontend.git/': The requested URL returned error: 403
/bin/bash: line 65: cd: /builds/redacted/frontend: No such file or directory
ERROR: Job failed: exit code 1
What is the problem here?
Check if this is covered by gitlab-org/gitlab-ce issue 39469
YAY - it works for me. This problem seems to have multiple solutions.
The one that worked for me is #44855
To summarize: being an Administrator on GitLab does not mean you have the "access" to do whatever you want in GitLab.
The "unable to access" permission applies to the person who is logged into GitLab and running the job.
To fix the problem, the person / account running the job must be a member (master) of the project.
This applies to private projects.
It is not necessary to make a private project public, even though that appears to fix the problem. GitLab suggests you must use https for the project to work, but you can use http.
SOLUTION: add your account to the project, even if you are the Administrator.
And:
Conrad has described it correctly.
You need rights to the project to run a pipeline; however, as administrator, you can start any pipeline.
I've had a case where a user who was an Admin in GitLab could push his commit from the command line, despite theoretically having no rights to the project - and the pipeline failed.
This inconsistency needs to be fixed: either the Admin user should not be able to push/start a pipeline without rights to it, or he should automatically be granted all rights to all projects. I'd prefer the first, because it separates GitLab administration from project rights. Sometimes I prefer not having full rights, just like working as a non-root user under Linux.

How to publish Docker images to Docker Hub from GitLab CI

GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, select .gitlab-ci.yml and Docker). The file looks like this, and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to GitLab's registry. How can we publish to Docker Hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry URL
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry URL can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in GitLab settings
I wanted to publish gableroux/unity3d images using GitLab CI; here's what I used in GitLab's Project > Settings > CI/CD > Variables:
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>, so the registry URL needs to be updated: use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using Docker, you can also use <username>/<project> instead, as shown below. I personally prefer when it's verbose, so I kept the full registry URL.
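That shorter form would just be (illustrative value for the same project as above):

CI_REGISTRY_IMAGE=gableroux/unity3d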
This answer should also cover other registries, just update environment variables accordingly. 🙌
To expand on GabLeRoux's answer:
I had issues at the push stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index.), I was able to push successfully.