docker-compose override application properties - variables

We have a Spring Boot application that uses an application.yml file to store properties. I was given a task to let a user override some properties when starting the application. Since we have dockerised our app, the docker-compose file seems to me the right place for that. I found one option that actually works: env_file:
backend:
  build:
    context: backend
    dockerfile: Dockerfile.backend
  restart: always
  ports:
    - 3000:3000
  env_file:
    - backend/custom.env
  volumes:
    - ../m2_repo:/root/.m2/
    - ../{APP_NAME}/data_sources:/backend/data_sources/
  links:
    - database
  networks:
    main:
      aliases:
        - backend
This solves my task perfectly: all KEY=VALUE pairs in the file override the corresponding properties in application.yml. However, I have two questions:
1. It appears that with multiple services in my docker-compose file I need to specify a separate env_file for each service, which is not very convenient. Is there a way to have one common env_file for the whole docker-compose file?
2. I know that the docker-compose run command has a -e option where I can pass KEY=VALUE pairs of environment variables. Is there a similar option for docker-compose up, so that env_file is not needed at all?

Ad 1: It is not possible. I also believe this is intentional: it forces the developer to define explicitly which container has access to which .env data.
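That said, while each service still has to declare its own env_file, if the goal is just to avoid repeating the same lines, a YAML anchor defined in an extension field can cut the duplication. A minimal sketch, assuming compose file format 3.4+ (which allows x- extension fields) and a hypothetical common.env:

version: "3.4"

x-common-env: &common-env
  env_file:
    - ./common.env   # hypothetical shared env file

services:
  backend:
    <<: *common-env  # merges the env_file key into this service
    build: backend
  database:
    <<: *common-env
    image: postgres:12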
Ad 2: No, you cannot supply the variables via a runtime parameter of the docker-compose up command (run docker-compose help up to see the available runtime params). But you can define them using the environment clause within the compose file, like:
restart: always
ports:
  - 3000:3000
env_file:
  - backend/custom.env
environment:
  - DB_PASSWORD        # <= #1
  - APP_ENV=production # <= #2
i.e.
either just the name of an env var (#1) - its value is then taken from the host machine,
or a whole KEY=VALUE definition (#2) - which creates a new variable available within the container.
See docs on environment clause for more clarification.
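To connect this back to the Spring Boot side: thanks to relaxed binding, an uppercase, underscore-separated environment variable maps onto the corresponding dotted property name, so SERVER_PORT overrides server.port and SPRING_DATASOURCE_URL overrides spring.datasource.url. As a sketch (the property names below are only illustrative, not from the original application.yml), backend/custom.env could contain:

# backend/custom.env
# each key overrides the matching application.yml property via relaxed binding
# (comments on their own lines, since env files have no inline comments)
SERVER_PORT=8081
SPRING_DATASOURCE_URL=jdbc:postgresql://database:5432/app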
Another thing you can do in order to override some settings is to extend the compose file using a "parent" one. See the docs on the extends clause.
Unfortunately, as of now, extends does not work with compose file version 3, but it is being discussed in this github issue, so hopefully it will be available soon :)
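For compose file format 2, a minimal sketch of extends could look like this (file and service names are made up):

# common.yml
version: "2"
services:
  base-backend:
    restart: always
    env_file:
      - backend/custom.env

# docker-compose.yml -- pulls in base-backend and adds project-specific settings
version: "2"
services:
  backend:
    extends:
      file: common.yml
      service: base-backend
    build: backend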

Related

Pass services to child pipeline in GitLab

I am trying to generalize the CI/CD of our GitLab projects.
I am planning to create a cicd-templates repo containing general jobs that I run in multiple projects.
For example, I have a Terraform template that accepts input variables and runs init, validate, plan and apply jobs.
I am now trying to create a similar template for our python-nox sessions. The issue is that our integration tests need two services.
I would prefer not to include the services in the template, since they are not needed for the integration tests of other projects (though other services might be).
So I was wondering how I could include a CI template (from another project) and pass the needed images from the parent pipeline.
What is not working:
Parent/project pipeline:
trigger-nox-template:
  variables:
    IMAGE: "registry.gitlab.com/path/to/my/image:latest"
  trigger:
    include:
      - project: cicd-templates
        file: /nox_tests.yml
    strategy: depend
  services:
    - name: mcr.microsoft.com/mssql/server:2017-latest
      alias: db
    - name: mcr.microsoft.com/azure-storage/azurite:3.13.1
      alias: storage
cicd-templates/nox_tests.yml:
variables:
  IMAGE: "registry.gitlab.com/path/to/a/default/image:latest"

integration:
  image: '$IMAGE'
  script:
    - python -m nox -s integration
As I said, I could hardcode the services in the template as well, but they might vary based on the parent pipeline, so I'm looking for a more dynamic solution.
P.S. The way I implemented the image does work, but if there is a more elegant way, that would be appreciated as well.
Thanks in advance!

Passing variables between Tekton Steps

I am trying to implement a basic Tekton CI pipeline. All the pipeline does is 1) get the source code, 2) build an image with a new version and push it to an image registry.
The image version is generated by a Tekton Step. Images are built by another Tekton Step that uses Kaniko, as described here.
I am aware of using workspaces to pass variables between Tekton Steps, meaning I can write the version to a file in the workspace. But I can't figure out a way to read this version from the file in the Kaniko build Step below:
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
    env:
      - name: "DOCKER_CONFIG"
        value: "/tekton/home/.docker/"
    command:
      - /kaniko/executor
    args:
      - --dockerfile=$(params.pathToDockerFile)
      - --destination=$(resources.outputs.builtImage.url):<IMAGE-VERSION-NEEDED-HERE>
      - --context=$(params.pathToContext)
      - --build-arg=BASE=alpine:3
There should be a common pattern for resolving this, but I am not sure if I am looking in the right places in the Tekton documentation.
Can anyone offer some pointers?
This is to confirm that I managed to resolve the issue by redesigning the Steps as Tasks, as suggested by @Jonas.
Tekton Tasks can have results that can be referred to in other Tasks. At the time of writing, Tekton Steps don't have this feature.
For more details refer to the links in @Jonas's comments above.
All Steps in a Task share the same Pod and thus have access to a shared workspace implemented as an emptyDir volume:
Volumes:
  tekton-internal-workspace:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
A common way to share data between Steps is to write a file in /workspace in one Step and read it in the next.
Alternatively, as suggested by @Jonas, if you use separate Tasks you can write a result in the first Task and feed it into a parameter of the second Task in the Pipeline definition.
Using results this way implicitly creates a dependency between the two Tasks, so the Tekton controller won't schedule the second Task until the first one has successfully completed and results are available.
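A sketch of that Task-level pattern (all names here are made up, not from the original pipeline): the first Task declares a result and writes to its path, and the Pipeline wires that result into a parameter of the build Task:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-version
spec:
  results:
    - name: image-version
      description: Tag to apply to the built image
  steps:
    - name: generate
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        # whatever is written to the result path becomes the Task result
        printf '1.2.3' > $(results.image-version.path)

And in the Pipeline definition:

tasks:
  - name: generate-version
    taskRef:
      name: generate-version
  - name: build-and-push
    taskRef:
      name: build-image   # hypothetical Task wrapping the kaniko step above
    params:
      - name: imageVersion
        # referencing the result is what creates the Task dependency
        value: $(tasks.generate-version.results.image-version)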
You can use the gcr.io/kaniko-project/executor:debug image, which has a shell at /busybox/sh.
Then create something like this (passing the kaniko command via script):
steps:
  - name: write-to-workspace
    image: ubuntu
    script: |
      #!/usr/bin/env bash
      echo "IMAGE_VERSION" > /workspace/FOO
  - name: read-from-workspace
    image: gcr.io/kaniko-project/executor:debug
    script: |
      #!/busybox/sh
      export IMAGE_VERSION=$(cat /workspace/FOO)
      echo "$IMAGE_VERSION"
      /kaniko/executor \
        --dockerfile=$(params.pathToDockerFile) \
        --destination=$(resources.outputs.builtImage.url):"${IMAGE_VERSION}" \
        --context=$(params.pathToContext) \
        --build-arg=BASE=alpine:3
You can refer to this discussion: https://github.com/tektoncd/pipeline/issues/1476

Variables in gitlab CI

I have just begun implementing CI jobs with gitlab-ci and I'm trying to create a job template. Basically the jobs use the same image, tags and script, in which I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic, and I was wondering if I could also use variables in the services and artifacts sections, so I could add a few more lines to my template, like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of this, and the docs only mention using variables in scripts. I know that variables are like environment variables for jobs, but I'm not sure whether they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
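For the anchor above, the merged test-chrome job comes out roughly like this (the variables themselves are only expanded later, at job runtime):

test-chrome:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/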
If you want to modularize your pipeline even further, I would recommend using include instead of YAML anchors, so you can reuse your templates across different pipelines.
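A sketch of the include approach (the project path and file name are made up): the template lives in another repository and is pulled in with include, and jobs reuse it via extends rather than an anchor, since YAML anchors do not work across included files:

# .gitlab-ci.yml
include:
  - project: 'my-group/ci-templates'
    file: '/templates/e2e.yml'

test-chrome:
  extends: .job_e2e_template   # hidden job defined in the included template
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"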

gitlab-ci: provide environment variable(s) to custom docker image in a pipeline

I want to set up a test stage for my gitlab-ci pipeline which depends on a custom docker image. I want to know how to provide some config (like setting an env variable or providing a .env file) so that the custom image runs properly and hence the stage passes.
Current config:
test_job:
  only:
    refs:
      - master
      - merge_requests
      - web
  stage: test
  services:
    - mongo:4.0.4
    - redis:5.0.1
    - registry.gitlab.com/myteam/myprivaterepo:latest
  variables:
    PORT: 3000
    SERVER_HOST: myprivaterepo
    SERVER_PORT: 9090
  script: npm test
I want to provide environment variables to the myprivaterepo docker image, which connects to the mongo:4.0.4 and redis:5.0.1 services for its functioning.
EDIT: The variables are MONGODB_URI="mongodb://mongo:27017/aics" and REDIS_CLIENT_HOST="redis". These have no meaning for the app being tested, but they do for the myprivaterepo image, without which the test stage will fail.
I figured it out. It is as simple as adding the environment variables to the variables: part of the YAML. This is what worked for me:
test_job:
  only:
    refs:
      - master
      - merge_requests
      - web
  stage: test
  services:
    - mongo:4.0.4
    - redis:5.0.1
    - name: registry.gitlab.com/myteam/myprivaterepo:latest
      alias: myprivaterepo
  variables:
    MYPRIVATEREPO_PORT: 9090 # Had to modify the image to use this variable
    MONGODB_URI: mongodb://mongo:27017/aics
    REDIS_CLIENT_HOST: redis
    PORT: 3000 # for the app being tested
    SERVER_HOST: myprivaterepo
    SERVER_PORT: 9090
  script: npm test
These variables seem to be applied to all services.
NOTE: There is a catch: you cannot have two images using the same environment variable names. I initially used PORT=???? as an environment variable in both myprivaterepo and the app being tested, and an EADDRINUSE error would pop up. So I had to update myprivaterepo to use MYPRIVATEREPO_PORT instead.
There is a ticket raised in GitLab CE; who knows when it will be implemented.

Best practice using Dockerfile with docker-compose and vcs

I have a question regarding best practice.
Let's say I have a web frontend developed in AngularJS and an API it gets data from. I put those two in separate repositories.
Now I want to dockerize the whole thing. My idea was to put a Dockerfile in each project which specifies its environment. So far so good, but what about when I also have a docker-compose file that starts those two services simultaneously? First of all, in which repo should I put this file, and secondly, how can I ensure that the images are always up to date?
You can use a structure like this:
project/
- project/docker-compose.yaml
- project/frontend/ (contains Dockerfile + necessary files)
- project/api/ (contains Dockerfile + necessary files)
In your docker-compose.yaml you can write something like this for each image:
frontend:
  build: ./frontend # folder containing the frontend's Dockerfile (used to build the image)
  image: my-frontend:1.0 # give a name to your image
  container_name: frontend-container # container name when an instance of your image is running
To start docker-compose you can run docker-compose up --build. The --build flag recreates your images when there are changes, so your compose setup stays up to date when the Dockerfile (image) changes.
Docker-compose 'reads' from top to bottom, so the service you describe first in your docker-compose.yaml will be created first. I would assume that is your API, because it can exist on its own and your frontend needs to connect to it. Sometimes the first service starts too slowly, which means the second service is already up but cannot find the first service (to connect to) and crashes. This can be solved with a wait-for-it.sh script: the second service uses the script to check when the first service is up, and only then starts its own process.
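A minimal sketch of that pattern (service names, port, and start command are assumptions; wait-for-it.sh has to be shipped inside the frontend image):

frontend:
  build: ./frontend
  links:
    - api
  # block until the API answers on port 3000, then start the frontend
  command: ["./wait-for-it.sh", "api:3000", "--", "npm", "start"]

api:
  build: ./api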