Docker container does not create table on run, MariaDB - sql

I'm not sure what I'm doing wrong here, but my MariaDB Docker container does not create a table when it starts.
What am I doing wrong?
docker-compose.yml
version: '3'
services:
  db:
    build: .
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: checkitDB
      MYSQL_USER: myuser
      MYSQL_PASSWORD: mypassword
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - ./initDB:/docker-entrypoint-initdb.d
initDB.sql
CREATE TABLE checkitDB.tasks (
  id VARCHAR(255) NOT NULL,
  title VARCHAR(32) NOT NULL,
  description TEXT NOT NULL,
  PRIMARY KEY (id)
);

There can be a couple of reasons for this.
Do you have a volume mapping for the database data as well? The directory docker-entrypoint-initdb.d has 'init' in its name to indicate that the scripts are only run on database initialization. So if you already have an initialized database, the scripts aren't run.
Another thing is that you map ./initDB, but your file is called initDB.sql. If you just want to map the file, you should do
- ./initDB.sql:/docker-entrypoint-initdb.d/initDB.sql
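To illustrate the first point: if the service also persists its database files in a volume, the entrypoint only runs the init scripts when that data directory is empty. A minimal sketch (the db_data volume name is illustrative):

```yaml
services:
  db:
    build: .
    environment:
      MYSQL_DATABASE: checkitDB
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - db_data:/var/lib/mysql   # persisted data; init scripts are skipped when this is non-empty
      - ./initDB.sql:/docker-entrypoint-initdb.d/initDB.sql
volumes:
  db_data:
```

Running docker-compose down -v removes the data volume, so the scripts in docker-entrypoint-initdb.d run again on the next docker-compose up.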


Variables in gitlab CI

I just started implementing CI jobs with gitlab-ci, and I'm trying to create a job template. Basically, the jobs use the same image, tags, and script, in which I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic, and I was wondering whether I could also use variables in the services and artifacts sections, so I could add a few more lines to my template like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of this, and the docs only talk about using variables in scripts. I know that variables act like environment variables for jobs, but I'm not sure whether they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further, I would recommend using include instead of YAML anchors, so you can reuse your templates in different pipelines.
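A sketch of that include approach, assuming the template is moved into its own file (the file path here is hypothetical):

```yaml
# e2e-template.yml (hypothetical file)
.job_e2e_template:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side

# .gitlab-ci.yml
include:
  - local: e2e-template.yml

test-chrome:
  extends: .job_e2e_template
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
```

Unlike a YAML anchor, the extends keyword works across included files and is deep-merged by GitLab.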

How to launch 50 browser instances using Selenium Grid on the same machine

I need to launch 50 browser instances (IE) in a virtual machine and execute the same test case 50 times in parallel on those browsers. This is a kind of load testing, and I'm not sure whether it's possible with the Selenium Grid concept. If not, I would like to know another method to perform this task.
You can use Docker and Docker Compose, if you are familiar with them.
First you have to install Docker (if you have Linux or macOS this should be easy; on Windows you can install Docker Desktop). There are lots of tutorials on how to use Docker.
After the install is finished, create a folder, and inside that folder create a .yml file (you can do this with Notepad++).
The file name should be: docker-compose.yml
Inside that .yml file you will have to paste this code:
version: '2'
services:
  chrome:
    image: selenium/node-chrome:3.14.0-gallium
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  hub:
    image: selenium/hub:3.14.0-gallium
    ports:
      - "4444:4444"
Once you have the YAML file created, open a terminal (e.g. Git Bash) in the directory where the .yml file is located and run the following command:
docker-compose up -d
The Grid images will be downloaded from Docker Hub and the Grid will start shortly.
After 1-2 minutes you should have the Grid up and running on your localhost.
You can check it yourself at http://localhost:4444/grid/console.
If your test setup already points at a local Grid, it should work as-is, but you will not be able to watch the tests running, because they now run inside the Docker containers.
Now if you need more nodes, run the following command:
docker-compose scale chrome=50
(on newer Compose versions: docker-compose up -d --scale chrome=50)
This will create 50 Chrome nodes.
However, you will need to allocate a lot of resources so the containers can handle that load.
If you need more info, I am happy to help!

gitlab-ci: provide environment variable(s) to custom docker image in a pipeline

I want to set up a test stage in my gitlab-ci pipeline which depends on a custom Docker image. I want to know how to provide some config to it (like setting environment variables or providing a .env file) so that the custom image runs properly and hence the stage passes.
Current config:
test_job:
  only:
    refs:
      - master
      - merge_requests
      - web
  stage: test
  services:
    - mongo:4.0.4
    - redis:5.0.1
    - registry.gitlab.com/myteam/myprivaterepo:latest
  variables:
    - PORT=3000
    - SERVER_HOST=myprivaterepo
    - SERVER_PORT=9090
  script: npm test
I want to provide environment variables to the myprivaterepo Docker image, which connects to the mongo:4.0.4 and redis:5.0.1 services for its functioning.
EDIT: The variables are MONGODB_URI="mongodb://mongo:27017/aics" and REDIS_CLIENT_HOST="redis". These have no meaning for the app being tested, but they do for the myprivaterepo image, without which the test stage will fail.
I figured it out. It is as simple as adding the environment variables in the variables: part of the YAML. This is what worked for me:
test_job:
  only:
    refs:
      - master
      - merge_requests
      - web
  stage: test
  services:
    - mongo:4.0.4
    - redis:5.0.1
    - name: registry.gitlab.com/myteam/myprivaterepo:latest
      alias: myprivaterepo
  variables:
    MYPRIVATEREPO_PORT: 9090 # Had to modify image to use this variable
    MONGODB_URI: mongodb://mongo:27017/aics
    REDIS_CLIENT_HOST: redis
    PORT: 3000 # for app being tested
    SERVER_HOST: myprivaterepo
    SERVER_PORT: 9090
  script: npm test
These variables seem to be applied to all services.
NOTE: There is a catch - you cannot have two images using the same environment variable names.
I initially used PORT=???? as an environment variable in both myprivaterepo and the app being tested, so an error would pop up saying EADDRINUSE. I had to update myprivaterepo to use MYPRIVATEREPO_PORT instead.
There is a ticket raised in gitlab-ce; who knows when it will be implemented.

gitlab-ci.yml - variables not evaluated

My gitlab-ci.yml is configured to deploy to a staging server on push to a staging branch. Each developer has their own staging server for testing. The way I have it now doesn't seem very scalable, in that I would have to duplicate each job for each user.
What I have now:
deploy_to_staging_sf:
  image: debian:jessie
  stage: deploy
  only:
    - staging_sf
  tags:
    - staging_sf
  script:
    - ./deploy.sh

deploy_to_staging_ay:
  image: debian:jessie
  stage: deploy
  only:
    - staging_ay
  tags:
    - staging_ay
  script:
    - ./deploy.sh
I was wondering if it was possible to do some kind of regex or pattern matching to keep it DRY and scalable, and I came up with this:
deploy_to_staging:
  image: debian:jessie
  stage: deploy
  only:
    - /^staging_.*$/
  tags:
    - $CI_COMMIT_REF_NAME
  script:
    - ./deploy.sh
I have the tag for each runner configured to match the branch name. However, $CI_COMMIT_REF_NAME is not evaluated for tags, and I just get the error:
This job is stuck, because you don't have any active runners online
with any of these tags assigned to them: $CI_COMMIT_REF_NAME
Is this actually possible and have I just done something wrong, or is it just not possible to evaluate variables here at all?
Thanks for any help.

docker-compose override application properties

We have a Spring Boot application that uses an application.yml file to store properties. I got a task to give users the possibility to override some properties when starting the application. Since we have dockerised our app, the docker-compose file seems the right place for that. I found one option which actually works, env_file:
backend:
  build:
    context: backend
    dockerfile: Dockerfile.backend
  restart: always
  ports:
    - 3000:3000
  env_file:
    - backend/custom.env
  volumes:
    - ../m2_repo:/root/.m2/
    - ../{APP_NAME}/data_sources:/backend/data_sources/
  links:
    - database
  networks:
    main:
      aliases:
        - backend
This solves my task perfectly, and all the KEY=VALUE pairs override the corresponding properties in application.yml. However, I have 2 questions:
1. It appeared that, having multiple services in my docker-compose file, I need to specify a separate env_file for each service, which is not very convenient. Is there a way to have one common env_file for the whole docker-compose file?
2. I know that the docker-compose run command has an -e option where I can pass key=value pairs of env variables. Is there a similar option for docker-compose up? I mean, so I don't need to use env_file at all.
Ad 1: It is not possible. I also believe this is intentional - to make the developer define which container has access to which .env data.
Ad 2: No, you cannot supply the variables via a runtime parameter of the up command of docker-compose (run docker-compose help up to see the available runtime params). But you can define them using the environment clause within the compose file, like:
restart: always
ports:
  - 3000:3000
env_file:
  - backend/custom.env
environment:
  - DB_PASSWORD # <= #1
  - APP_ENV=production # <= #2
i.e.
either just the name of the env var (#1) - its value is then taken from the host machine
or a whole KEY=value definition (#2) to create a new one available within the container
See the docs on the environment clause for more clarification.
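A related workaround for question 2, sketched here under the assumption that the variable is set in the shell that invokes Compose: values in the environment clause can be interpolated from the host, which comes close to passing -e to up:

```yaml
services:
  backend:
    environment:
      - APP_ENV=${APP_ENV:-development}   # taken from the host shell, with a default
```

Then APP_ENV=production docker-compose up -d starts the container with the override, and no env_file is needed for that variable.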
Another thing you can do in order to override some settings is to extend the compose file using a "parent" one. See the docs on the extends clause.
Unfortunately, as of now, extends won't work with version 3 compose files, but it is being discussed in this github issue, so hopefully it will be available soon :)
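For a version 2 file, an extends-based override might look like this (the file and service names are illustrative):

```yaml
# common.yml - the shared "parent" definition
version: '2'
services:
  backend:
    build:
      context: backend
      dockerfile: Dockerfile.backend
    env_file:
      - backend/custom.env

# docker-compose.yml - pulls in the parent and overrides values
version: '2'
services:
  backend:
    extends:
      file: common.yml
      service: backend
    environment:
      - APP_ENV=production   # environment entries take precedence over env_file
```

A similar effect is available in all file versions via the multi-file merge, e.g. docker-compose -f common.yml -f overrides.yml up, which merges the files in the order given.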