How to use Docker in a GitLab CI stage if I already use another image? - gitlab-ci

This is working code when I only need Docker:
build_stage:
  stage: build
  image: docker:20.10.16 # This is mandatory to use Docker
  services:
    - docker:20.10.16-dind
  script:
    - docker push $MY_IMAGE # This code works
In another stage, I need to use a different image:
deploy_stage:
  stage: deploy
  image: google/cloud-sdk:latest # Another image, so I cannot specify "docker" here
  services:
    - docker:20.10.16-dind # This line is useless because there is no "docker" image
  script:
    - docker pull $MY_IMAGE # This does not work because there is no Docker
I use another image for the stage, but I also need Docker.
How can I use Docker in my CI stage?

I've found that I do not need to use image: docker to use Docker, but then I must specify some variables:
deploy_stage:
  stage: deploy
  image: google/cloud-sdk:latest # Note that I do not use the Docker image here
  services:
    - docker:20.10.16-dind
  variables:
    # This is mandatory if you do not use `image: docker`
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker pull $MY_IMAGE # This works
You can read more about these variables in the official documentation.
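Note that the variant above disables TLS between the job and the dind service. If you keep TLS enabled (the documented default for newer dind images, assuming your runner mounts the shared /certs volume as described in GitLab's Docker-in-Docker docs), a minimal sketch looks like this instead:
deploy_stage:
  stage: deploy
  image: google/cloud-sdk:latest
  services:
    - docker:20.10.16-dind
  variables:
    # TLS variant: dind listens on 2376 and writes client certs to the shared volume
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  script:
    - docker pull $MY_IMAGE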

Related

.gitlab-ci.yml pipeline runs only on one branch

I have a .gitlab-ci.yml file. When I push to the stage branch it runs the stage commands (only stage), but when I merge to main it still runs only the "stage" commands.
What am I missing?
variables:
  DOCKER_REGISTRY: 036470204880.dkr.ecr.us-east-1.amazonaws.com
  AWS_DEFAULT_REGION: us-east-1
  APP_NAME: apiv6
  APP_NAME_STAGE: apiv6-test
  DOCKER_HOST: tcp://docker:2375

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
    - aws ecs update-service --cluster apiv6 --service apiv6 --force-new-deployment
  only:
    - main

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME_STAGE:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME_STAGE:latest
    - aws ecs update-service --cluster apiv6-test --service apiv6-test-service --force-new-deployment
  only:
    - stage
Itamar, I believe this is a YAML limitation. See this GitLab issue as reference.
The problem is that you have two jobs with the same name. But when the YAML file is parsed, you're actually overriding the first job.
Also, from the official GitLab documentation:
Use unique names for your jobs. If multiple jobs have the same name, only one is added to the pipeline, and it's difficult to predict which one is chosen.
Please, try renaming one of your jobs and test it again.
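For example, a minimal sketch of the fix, keeping both job bodies as above but giving each a unique name (the names publish-main and publish-stage are just illustrative):
publish-main:
  # ... same image/services/before_script/script as the first job
  only:
    - main

publish-stage:
  # ... same image/services/before_script/script as the second job
  only:
    - stage
With distinct names, both jobs survive YAML parsing, and the only: rules then select the right job per branch.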

Drone CI - How to set pipeline env var to result of CLI output

I recognize that within a pipeline step I can run a simple export, like:
commands:
  - export MY_ENV_VAR=$(my-command)
...but if I want to use this env var throughout the whole pipeline, is it possible to do something like this:
environment:
  MY_ENV_VAR: $(my-command)
When I do this, I get yaml: unmarshal errors: line 23: cannot unmarshal !!seq into map[string]*yaml.Variable, which suggests this isn't possible. My end goal is to write a Drone plugin that accepts the output of $(...) as one of its settings. I'd prefer that the plugin not run the command itself, but just use its output.
I've also attempted to use step dependencies to export an env var; however, its state doesn't carry over between steps:
- name: export
  image: bash
  commands:
    - export MY_VAR=$(my-command)
- name: echo
  image: bash
  depends_on:
    - export
  commands:
    - echo $MY_VAR # empty
Writing the command output to a script file might be a better way to do what you want, since filesystem changes are persisted between individual steps.
---
kind: pipeline
type: docker

steps:
  - name: generate-script
    image: bash
    commands:
      # - my-command > plugin-script.sh
      - printf "echo Fetching Google;\n\ncurl -I https://google.com/" > plugin-script.sh
  - name: test-script-1
    image: curlimages/curl
    commands:
      - sh plugin-script.sh
  - name: test-script-2
    image: curlimages/curl
    commands:
      - sh plugin-script.sh
From Drone's Docker pipeline documentation:
Workspace
Drone automatically creates a temporary volume, known as your workspace, where it clones your repository. The workspace is the current working directory for each step in your pipeline.
Because the workspace is a volume, filesystem changes are persisted between pipeline steps. In other words, individual steps can communicate and share state using the filesystem.
⚠ Workspace volumes are ephemeral. They are created when the pipeline starts and destroyed after the pipeline completes.
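Applied to the original example, a minimal sketch: instead of export, write the variable assignment to a file in the workspace and source it in the later step (my-command is the asker's placeholder):
steps:
  - name: export
    image: bash
    commands:
      # Persist the value via the shared workspace volume
      - echo "export MY_VAR=$(my-command)" > .env.sh
  - name: echo
    image: bash
    depends_on:
      - export
    commands:
      # Re-create the variable in this step's shell
      - . ./.env.sh
      - echo $MY_VAR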
You can't execute a command in the environment block, period.
But maybe you can define a "command string" in the environment block, like:
environment:
  MY_ENV_VAR: 'echo "this is the command to execute"' # note the single quotes
Then, in the commands block:
commands:
  - eval $MY_ENV_VAR
Worth a try.

Use environment variables in a docker-compose.yml file in a VueJS application

Please tell me how you can pass environment variables to the Vue application from the docker-compose.yml file. For some reason, after the yarn build command in .gitlab-ci.yml, the application only sees the env variables that are written in the env.production file.
My docker-compose.yml
version: "3.7"
services:
  develop_dashboard_frontend:
    image: some_image:latest
    container_name: develop_dashboard_frontend
    environment:
      VUE_APP_API_URL: "some_api_URL"
    ports:
      - "127.0.0.1:8016:80"
    restart: always
Any ideas?
You will need to put dashes (-) before each environment variable you want to specify, like you did with ports in your example.
Refer to: https://docs.docker.com/compose/environment-variables/
$ cat docker-compose.yml
version: '3'
services:
  api:
    image: 'some_image:tag'
    environment:
      - VARIABLE_NAME=variable_value
You also need to distinguish between build-time and runtime environment variables.
You can supply environment variables for your build, but they might not be preserved at runtime. It really depends on your build (I'm not familiar with yarn build).
However, I recommend supplying the env variables at run time.
Just define them in the yaml as you tried.
Using docker stack deploy or docker-compose up, it should work.
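In the Vue case specifically, VUE_APP_* variables are inlined into the JavaScript bundle while yarn build runs, so runtime environment: entries never reach an already-built app. A minimal sketch of passing the value at build time instead, assuming the image is built from a Dockerfile in this repo (the paths and arg name mirror the question; your setup may differ):
services:
  develop_dashboard_frontend:
    build:
      context: .
      args:
        VUE_APP_API_URL: "some_api_URL" # visible while yarn build runs, not just at runtime
    ports:
      - "127.0.0.1:8016:80"
The Dockerfile then has to forward the build arg into the build environment:
ARG VUE_APP_API_URL
ENV VUE_APP_API_URL=$VUE_APP_API_URL
RUN yarn build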

How to set .gitlab-ci.yml to only run the tasks on one node and only update or push the repo to other nodes (docker-swarm)?

This is my .gitlab-ci.yml file in my repo:
image: docker

#services:
#  - docker:dind

stages:
  - build
  - deploy

build-prod:
  stage: build
  only:
    - master
  tags:
    - docker
  script:
    - docker network create -d overlay reprox
  environment: master

deploy-prod:
  stage: deploy
  only:
    - master
  tags:
    - docker
  script:
    - docker stack deploy -c ./site1/docker-compose.yml site1
    - docker stack deploy -c ./site2/docker-compose.yml site2
    - docker stack deploy -c ./site3/docker-compose.yml site3
    - docker stack deploy -c ./reverse-proxy/docker-compose.yml proxy
  environment: master
My setup is 1 manager and 2 worker nodes, and I only need to run the build and deploy jobs on the manager node; the other nodes just need to have the repo, with no need to run the bash commands on the worker nodes.
I added a manager runner with the "docker" tag and worker-node runners with the "runner" tag.
Remove your docker tag. You can configure your workers to run only jobs with specific tags:
job1:
  tags:
    - dockernode_1

job2:
  tags:
    - dockernode_2
The docker tag you used previously was probably just a workaround (or from a tutorial) to make the runners work on all jobs. If you don't want a runner to care about tagging, you can configure it to pick up all available jobs, for example at registration time as sketched below.
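For reference, a minimal sketch of registering a runner with a node-specific tag that also accepts untagged jobs; the URL and token are placeholders, while --tag-list and --run-untagged are standard gitlab-runner register options:
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token "$REGISTRATION_TOKEN" \
  --executor docker \
  --docker-image docker:20.10.16 \
  --tag-list "dockernode_1" \
  --run-untagged="true"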

How to publish docker images to docker hub from gitlab-ci

Gitlab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your project, select .gitlab-ci.yml and docker). The file looks like this and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to gitlab's registry. How can we publish to docker hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry url
Using hub.docker.com won't work, you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry url can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in gitlab settings
I wanted to publish gableroux/unity3d images using gitlab-ci, here's what I used in Gitlab's project > Settings > CI/CD > Variables
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>.
The registry url needs to be updated, so use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer it verbose, so I kept the full registry url.
This answer should also cover other registries; just update the environment variables accordingly. 🙌
To expand on GabLeRoux's answer,
I had issues at the pushing stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index.), I was able to push successfully.