GitLab CI Docker build stalls after 2 minutes
A slight code change reproduces the same bug across different repos, even though CI/CD used to work on these repos.
...
Step 4/11 : RUN yarn install
---> Using cache
---> c75197e0dbaa
Step 5/11 : COPY babel.config.js babel.config.js
---> Using cache
---> 482b1ff64322
Step 6/11 : COPY src src
---> d530de056b88
Step 7/11 : RUN mkdir locales
---> Running in f7885ea9a3c3
...
Link to the GitLab issue:
https://gitlab.com/gitlab-com/support-forum/issues/5202
This is a known issue on GitLab. Pin the dind service to a specific version as follows; that should fix your problem.
services:
  - docker:19.03.5-dind
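For context, a minimal sketch of where the pinned service goes in .gitlab-ci.yml (the job name and build command are placeholders; pinning the docker image to the matching tag is also a reasonable precaution):

image: docker:19.03.5

services:
  - docker:19.03.5-dind

build:
  stage: build
  script:
    - docker build -t my-image .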
Related
Currently I'm trying to build a container serving a VueJS application via Cloud Native Buildpacks.
I already have a working Dockerfile that builds VueJS in production mode and then copies the results into an nginx image, but I would like to try CNB.
So I just created an empty VueJS project for testing via vue create vue-tutorial and am trying to do something like what is described at https://cli.vuejs.org/guide/deployment.html#heroku, but using CNB.
Does anyone know a working recipe for doing that with CNB?
P.S. Currently I'm trying to build it with
pack build spa --path . \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --buildpack gcr.io/paketo-buildpacks/nginx
but I get the following error (and I'm not sure I'm on the right track):
===> DETECTING
ERROR: No buildpack groups passed detection.
ERROR: Please check that you are running against the correct path.
ERROR: failed to detect: no buildpacks participating
ERROR: failed to build: executing lifecycle: failed with status code: 100
Update:
My current Dockerfile:
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:1.19-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
We chatted about this in Slack, but I wanted to capture it here too:
pack build yourimage \
  --buildpack heroku/nodejs \
  --buildpack https://cnb-shim.herokuapp.com/v1/heroku-community/static
This command may do what you want. The static buildpack used in that example is not yet converted to a cloud native buildpack, but the shim may allow you to build a workable artifact. Then run your image with something like:
docker run -it -e PORT=5000 -p 5000:5000 yourimagename
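Note that the heroku-community/static buildpack reads its configuration from a static.json file in the project root; for a Vue app whose build output lands in dist/, a minimal sketch would be:

{
  "root": "dist/"
}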
I have a very basic integration configured for GitLab CI, but it fails almost at the beginning, when it has to clone the code.
My integration is this:
image: node:latest

stages:
  - build
  - test

cache:
  paths:
    - node_modules/
    - dist/

build-prod:
  stage: build
  script:
    - npm install
    - npm run build-prod
  artifacts:
    paths:
      - node_modules/
      - dist/

test_with_karma:
  stage: test
  script: ng test
And the error that I get is this:
Running with gitlab-runner 11.7.0 (8bb608ff)
on fakehost 2eaf11ea
Using Docker executor with image node:latest ...
Pulling docker image node:latest ...
Using docker image sha256:8c67bfd7b95bdc535edc4a4144f5392b0f73efd6385fbcb47747d028d7059359 for node:latest ...
Running on runner-2eaf11ea-project-56-concurrent-0 via fakehost...
Cloning repository...
Cloning into '/builds/redacted/frontend'...
remote: You are not allowed to download code from this project.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@working-domain.com/redacted/frontend.git/': The requested URL returned error: 403
/bin/bash: line 65: cd: /builds/redacted/frontend: No such file or directory
ERROR: Job failed: exit code 1
What is the problem here?
Check if this is covered by gitlab-org/gitlab-ce issue 39469
YAY - it works for me. This problem seems to have multiple solutions.
The one that worked for me is #44855.
To summarize: being an Administrator on GitLab does not mean you have the "access" to do whatever you want in GitLab.
The "unable to access" permission check applies to the person who is logged into GitLab and running the job.
To fix the problem, the person / account running the job must be a member (master) of the project.
This applies to private projects.
It is not necessary to make a private project public, even though that appears to fix the problem. GitLab suggests you must use HTTPS for the project to work, but you can use HTTP.
SOLUTION - add your account to the project even if you are the Administrator.
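If you prefer to add the member via the API instead of the UI, here is a sketch (host, token, project ID, and user ID are placeholders; access_level 40 corresponds to Maintainer/master):

curl --request POST \
  --header "PRIVATE-TOKEN: <your-admin-token>" \
  --data "user_id=<user-id>&access_level=40" \
  "https://gitlab.example.com/api/v4/projects/<project-id>/members"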
And:
Conrad has described it correctly.
You need to have rights to the project to run its pipeline; however, as administrator, you can start any pipeline.
I've seen the case where a user who was an Admin in GitLab could push his commit from the command line, despite theoretically having no rights to the project - and the pipeline failed.
This inconsistency needs to be fixed: either an Admin user should not be able to push/start a pipeline without rights to it, or he should automatically be granted all rights to all projects. I'd prefer the first, because it separates GitLab administration from project rights. Sometimes I prefer not having full rights, just like working as a non-root user under Linux.
GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, select .gitlab-ci.yml and docker). The file looks like this, and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this publishes to GitLab's registry. How can we publish to Docker Hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry url
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry URL can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in GitLab settings
I wanted to publish gableroux/unity3d images using GitLab CI; here's what I used in the project's Settings > CI/CD > Variables:
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>.
The registry url needs to be updated, so use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer it when it's verbose, so I kept the full registry url.
This answer should also cover other registries, just update environment variables accordingly. 🙌
To expand on GabLeRoux's answer:
I had issues at the push stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index. part) I was able to push successfully.
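Putting both answers together, a variable set that should work for Docker Hub looks like this (username and project are placeholders):

CI_REGISTRY_USER=<your-dockerhub-username>
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/<your-dockerhub-username>/<project>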
I am running "dev mode" by leveraging pre-generated orderer and channel artifacts for a sample dev network.
Here the cli requires the image hyperledger/fabric-tools. By default it tries to pull the latest tag of that image, which throws this error:
Error response from daemon: manifest for hyperledger/fabric-tools:latest not found
So I pulled the image hyperledger/fabric-tools:x86_64-1.0.0 and retagged it as hyperledger/fabric-tools:latest (not sure whether that is the proper way) via:
docker pull hyperledger/fabric-tools:x86_64-1.0.0
docker tag hyperledger/fabric-tools:x86_64-1.0.0 hyperledger/fabric-tools
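Retagging works for each required image; a shell sketch that does the same for the other fabric images in one go (the image list here is an assumption - adjust it to whatever your network uses):

for img in fabric-tools fabric-peer fabric-orderer fabric-ccenv; do
  docker pull hyperledger/$img:x86_64-1.0.0
  docker tag hyperledger/$img:x86_64-1.0.0 hyperledger/$img:latest
done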
My network is running successfully, but unfortunately the cli container has stopped running.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d10d170cd2fa hyperledger/fabric-tools:x86_64-1.0.0 "/bin/bash -c ./sc..." 29 seconds ago Exited (1) 27 seconds ago cli
163f494bb85f hyperledger/fabric-ccenv "/bin/bash -c 'sle..." 59 minutes ago Up About a minute chaincode
e96e86930d94 hyperledger/fabric-peer "peer node start -..." 59 minutes ago Up About a minute 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer
c568480e30d2 hyperledger/fabric-orderer "orderer" 59 minutes ago Up About a minute 0.0.0.0:7050->7050/tcp
You can use the tools container as the cli container.
docker exec -it d10d170cd2fa /bin/bash
Can you post the logs of the cli container by issuing docker logs <containerId>? The cli container exiting doesn't necessarily mean there's an error with the e2e test.
If you started the services using docker-compose, you can run either docker-compose -f docker-compose-simple.yaml restart cli or docker-compose -f docker-compose-simple.yaml up cli (note that the -f flag must come before the subcommand).
However, if you started your network AFTER having tagged the fabric-tools image as above, you should examine the logs of your exited container with docker logs cli, to determine why it exited.
It can be caused by previously running docker containers. In my case it worked correctly the first time but gave the error the second time. Killing and removing the stale containers using
docker rm -f container_name
and starting the containers again solved the problem.
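If you used Compose to start the network, a cleaner way to reset everything is to tear the project down and bring it back up (a sketch, assuming the same docker-compose-simple.yaml file from above):

docker-compose -f docker-compose-simple.yaml down
docker-compose -f docker-compose-simple.yaml up -d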
I am trying to build a Docker container for an ASP.NET Core application, and I get errors while trying to retrieve NuGet packages:
docker build -t my:container .
Sending build context to Docker daemon 9.58 MB
Step 1 : FROM microsoft/dotnet:latest
---> 3693707d4f7f
Step 2 : COPY . /app
---> Using cache
---> 22a461236738
Step 3 : WORKDIR /app
---> Using cache
---> 8bea2af489ad
Step 4 : RUN dotnet restore
---> Running in 5fbfe078c820
log : Restoring packages for /app/project.json...
error: Unable to load the service index for source https://api.nuget.org/v3/index.json.
error: An error occurred while sending the request.
error: Peer certificate cannot be authenticated with given CA certificates
The command 'dotnet restore' returned a non-zero code: 1
The Dockerfile I am using is a pretty standard one, based on the microsoft/dotnet:latest image.
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
EXPOSE 9881/tcp
ENV ASPNETCORE_URLS http://*:9881
ENTRYPOINT ["dotnet", "run"]
This used to work a while ago; something seems to have broken, but I have no idea what that would be.
The problem was that I had been using an experimental version (1.12.3) of Docker.
Everything works great now that I've installed the latest official version on Windows (still 1.12.3, but not marked as experimental).
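To check whether a Docker daemon is an experimental build, the version output reports it directly (a quick sanity check; the --format template is an assumption and needs a reasonably recent client):

docker version --format 'server {{.Server.Version}} experimental={{.Server.Experimental}}'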