How to deploy the same job on all my runners? - gitlab-ci

I have several VMs running gitlab-runner, and I'm using gitlab-ci to deploy microservices onto those VMs. Now I want to monitor those VMs with Prometheus and Grafana, but I need to set up node-exporter/cadvisor etc. services on those VMs.
My idea is to use gitlab-ci to define a common job for those VMs.
I have already written the docker-compose.yml and .gitlab-ci.yml:
version: '3.8'
services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
  cadvisor:
    image: google/cadvisor
    container_name: cadvisor
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    ports:
      - "8080:8080"
deploy-workers:
  tags:
    - worker
  stage: deploy-workers
  script:
    - docker-compose -f docker-compose.worker.yaml pull
    - docker-compose -f docker-compose.worker.yaml down
    - docker-compose -f docker-compose.worker.yaml up -d
Then I registered a runner on all of my VMs with the 'worker' tag.
However, only one worker job is triggered during CI.
I have about 20 VMs to cover.
Does anyone have suggestions?

This is probably not a good way to deploy your services onto virtual machines. You don't want to just launch your GitLab CI job and hope that it results in what you want, and managing each VM separately is going to be both tedious and error-prone.
What you probably want is a declarative way to define/describe your infrastructure, the state that infrastructure should be configured into, and the applications running on it.
For example, you could:
Use a proper orchestrator, such as docker swarm or Kubernetes AND/OR
Use a provisioning tool, such as Ansible connected to each VM, or if your VMs run in the cloud, Terraform or similar.
In both these examples, you can leverage these tools from a single GitLab CI job and deploy changes to all of your VMs/clusters at once.
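For instance, a single CI job running on one runner could push the same compose file to every VM with Ansible. A minimal sketch (the image name, inventory file, and playbook name are placeholders, not from the question):
deploy-monitoring:
  stage: deploy
  image: my-ansible-image            # placeholder: any image with ansible-playbook available
  script:
    # inventory.ini would list all 20 VMs; the playbook copies docker-compose.worker.yaml
    # to each host and runs `docker-compose up -d` there
    - ansible-playbook -i inventory.ini deploy-monitoring.yml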
Using docker swarm
For example, instead of running your docker-compose on 20 hosts, you can join all 20 VMs to the same docker swarm.
Then in your compose file, you create a deploy key specifying how many replicas you want across the swarm, including numbers per node. Or use mode: global to simply specify you want one container of the service per host in your cluster.
services:
  node-exporter:
    deploy:
      mode: global # deploy exactly one container per node in the swarm
    # ...
  cadvisor:
    deploy:
      mode: global # deploy exactly one container per node in the swarm
Then running docker stack deploy from any manager node will do the right thing to all your swarm worker nodes. Docker swarm will also automatically restart your containers if they fail.
See deploy reference.
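That deployment step can itself be a single GitLab CI job that runs against a manager node. A sketch, assuming a runner tagged swarm-manager runs on (or can reach) a manager node; the stage and stack name are made up:
deploy-monitoring-stack:
  stage: deploy
  tags:
    - swarm-manager                  # assumed tag for a runner on a swarm manager node
  script:
    # deploys/updates the stack; swarm schedules the global services onto every node
    - docker stack deploy -c docker-compose.worker.yaml monitoring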
Using swarm (or any orchestrator) has a lot of other benefits, too, like health checking, rollbacks, etc. that will make your deployment process a lot safer and more maintainable.
If you must use a job per host
Set a unique tag for each runner on each VM. Then use a parallel matrix with a job set to each tag.
job:
  parallel:
    matrix:
      - RUNNER: [vm1, vm2, vm3, vm4, vm5] # etc.
  tags:
    - $RUNNER
See run a matrix of parallel jobs
You want to make sure the tag is unique and covers all your hosts, or you may run the same job on the same host multiple times.
This will let you do what you were seeking to do. However, it's not an advisable practice. As a simple example: there's no guarantee that your docker-compose up will succeed and you may just take down your entire cluster all at once.
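If you go this route anyway, the full job might look roughly like this (a sketch combining the matrix above with the compose commands from the question; the per-VM tag values are assumptions):
deploy-workers:
  stage: deploy-workers
  parallel:
    matrix:
      - RUNNER: [vm1, vm2, vm3, vm4, vm5]   # one unique tag per VM, 20 entries in total
  tags:
    - $RUNNER
  script:
    - docker-compose -f docker-compose.worker.yaml pull
    - docker-compose -f docker-compose.worker.yaml down
    - docker-compose -f docker-compose.worker.yaml up -d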

Related

Is there a better way than to loop through a list of Proxmox VMs and start/stop them through Ansible?

I am following this guide to learn how to use Proxmox, Ansible and Terraform together for automation.
I have the below Ansible playbook that can be used to start or stop a Kubernetes cluster. The cluster consists of one Kubernetes control-plane node and three worker nodes.
kubernetes-power-mgmt.yaml
---
- hosts: pmox_node
  become: yes
  tasks:
    - name: "Changing state to: {{state}}"
      community.general.proxmox_kvm:
        api_host: 192.168.2.220
        api_user: ansible_sa#pam
        api_token_id: ansible_sa_token_id
        api_token_secret: "token_secret"
        name: "{{item}}"
        state: "{{state}}"
        timeout: 90
      loop:
        - control-plane.k8s.cluster
        - worker-1.k8s.cluster
        - worker-2.k8s.cluster
        - worker-3.k8s.cluster
inventory.yaml
all:
  children:
    pmox_node:
      hosts:
        skillstech:
          ansible_host: 192.168.2.220
          ansible_user: root
I am using the below command to start the VMs on my proxmox server (192.168.2.220)
ansible-playbook --extra-vars="state=started" -i inventory.yaml kubernetes-power-mgmt.yaml
Command to stop the VMs
ansible-playbook --extra-vars="state=stopped" -i inventory.yaml kubernetes-power-mgmt.yaml
I have two concerns:
When using the loop in the playbook to cycle through the worker nodes, it initiates an SSH connection to the Proxmox server on each iteration of the loop. Is this OK? If not, is there a way to make only one SSH connection, with the shutdown/startup loop then happening on the Proxmox server?
Can this setup be improved by refactoring the playbook and inventory files? For example, can it be made more idiomatic Ansible?
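One possible refactor for the first concern (a sketch, not an accepted answer): community.general.proxmox_kvm talks to the Proxmox HTTP API, so the play can target localhost and avoid SSH to the Proxmox node altogether, provided proxmoxer and requests are installed on the control machine:
---
- hosts: localhost                        # run on the control machine; the module calls the Proxmox API
  connection: local
  gather_facts: false
  tasks:
    - name: "Changing state to: {{ state }}"
      community.general.proxmox_kvm:
        api_host: 192.168.2.220
        api_user: ansible_sa#pam          # values copied from the original playbook
        api_token_id: ansible_sa_token_id
        api_token_secret: "token_secret"
        name: "{{ item }}"
        state: "{{ state }}"
        timeout: 90
      loop:
        - control-plane.k8s.cluster
        - worker-1.k8s.cluster
        - worker-2.k8s.cluster
        - worker-3.k8s.cluster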

Can I have Redis available in my DDEV container?

I use DDEV as a development environment for a TYPO3 project. I want to have a Redis server available (for caching).
How can I achieve that?
In order to have Redis available for TYPO3 you need to have:
Redis server
To create a Redis server for your project, just create a file .ddev/docker-compose.redis.yaml with the following content:
# ddev redis recipe file
#
version: '3.6'
services:
  redis:
    container_name: ddev-${DDEV_SITENAME}-redis
    image: redis:4
    restart: always
    ports:
      - 6379
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=6379
    volumes: []
  web:
    links:
      - redis:$DDEV_HOSTNAME
Configure your application to use Redis
Use redis as the host and 6379 as the port.
FYI! DDEV added PHP-Redis to the web container, as of DDEV v1.1.0 on 15 Aug.
https://www.drud.com/ddev-local/ddev-v1-1-0/
"More services! We’ve added PHP-Redis to the web container. We heard repeatedly that not having Redis was a major hurdle for people who wanted to use DDEV. We hope this helps!"
You can get redis with ddev get drud/ddev-redis. There's also ddev get drud/ddev-redis-commander for use with the ddev redis service.
https://ddev.readthedocs.io/en/latest/users/extend/additional-services/

How to use a Redis cluster in GitLab CI and write .gitlab-ci.yml

My Spring Boot app uses a Redis cluster, but in GitLab CI I only find a plain Redis service. How can I run the CI with a Redis cluster?
.gitlab-ci.yml
services:
  - grokzen/redis-cluster
application.yml
spring:
  redis:
    cluster:
      nodes:
        - grokzen__redis-cluster:7000
        - grokzen__redis-cluster:7001
It worked.
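A side note (my assumption, not part of the original answer): the grokzen__redis-cluster hostname is what GitLab derives from the image name; you can also give the service an explicit alias so the node addresses read more cleanly:
services:
  - name: grokzen/redis-cluster
    alias: redis-cluster   # nodes would then be reachable as redis-cluster:7000, redis-cluster:7001, ...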

How do I run two docker containers in the same backend that can accept connections on multiple ports?

I need to run two identical containers behind Traefik which have to accept requests coming in on multiple ports. To do this I am using docker service labels. The problem that I am running into is when I use Docker service labels and try to scale up to two containers I get an error message about the backend already being defined.
Using the normal labels (traefik.frontend, traefik.port etc.) works fine, but adding the extra labels (traefik.whoami.frontend, traefik.whoami.port etc.) seems to break things.
Docker compose file:
version: '2'
services:
  whoami:
    image: emilevauge/whoami
    networks:
      - web
    labels:
      - "traefik.http.frontend.rule=Host:whoami.docker.localhost"
      - "traefik.http.port=80"
      - "traefik.http.frontend.entryPoints=http"
      - "traefik.http.frontend.backend=whoami"
      - "traefik.soap.frontend.rule=Host:whoami.docker.localhost"
      - "traefik.soap.port=8443"
      - "traefik.soap.frontend.entryPoints=soap"
      - "traefik.soap.frontend.backend=whoami"
networks:
  web:
    external:
      name: traefik_webgateway
Scale up:
$ docker-compose scale whoami=2
Creating and starting whoami_whoami_2 ... done
Traefik error log:
proxy_1 | time="2017-10-23T15:37:16Z" level=error msg="Near line 39 (last key parsed 'backends.backend-whoami.servers'): Key 'backends.backend-whoami.servers.service' has already been defined."
Can anyone tell me what I'm doing wrong here or if there is another way to map two ports to a container?
Thanks!
There was a bug in Traefik's Docker replicas management.
A fix will be merged in the next release: https://github.com/containous/traefik/pull/2314.

Share a data volume between two docker containers

I use Docker version 1.13.1, build 092cba3, on Windows 10.
I have a custom Jenkins container that builds code from Github in a volume.
The volume is /var/jenkins_home/workspace/myjob.
I also have an Apache container that I want to share the volume with.
The docker-compose.yml file is:
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: jenkins:v1
    environment:
      JAVA_OPTS: "-Djava.awt.headless=true"
      JAVA_OPTS: "-Djenkins.install.runSetupWizard=false" # Start jenkins unlocked
    ports:
      # - "50000:50000" # jenkins nodes
      - "8686:8080" # jenkins UI
    volumes:
      - myjob_volume:/var/jenkins_home/workspace/myjob
  apache:
    container_name: httpd
    image: httpd:2.2
    volumes_from:
      - jenkins
volumes:
  myjob_volume:
I basically want the Jenkins container to fetch the code into a volume, which will then be visible to the Apache (httpd) container, so that every change I make to the code from my IDE and push to GitHub will be visible in the Apache container. The volume is created in the Apache container, but when I successfully build the code in the Jenkins container, it does not appear in the volume in Apache.
EDIT:
After launching the 2 containers with docker-compose up -d,
I enable their volumes from Kitematic
I change the volume path for Apache to point to the Jenkins volume
and when I build the code from Jenkins, Apache sees it as I would like.
So... how should I do the same from the docker-compose file?
You are using volumes_from which "copies" the mount definition from the container you're specifying. As a result, the myjob_volume will be mounted at /var/jenkins_home/workspace/myjob inside the Apache container. The official Apache image from Docker hub (https://hub.docker.com/_/httpd/) uses /usr/local/apache2/htdocs/ as the webroot.
To mount the volume at that location, update the docker-compose file to look like this:
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: jenkins:v1
    environment:
      JAVA_OPTS: "-Djava.awt.headless=true"
      JAVA_OPTS: "-Djenkins.install.runSetupWizard=false" # Start jenkins unlocked
    ports:
      # - "50000:50000" # jenkins nodes
      - "8686:8080" # jenkins UI
    volumes:
      - myjob_volume:/var/jenkins_home/workspace/myjob
  apache:
    container_name: httpd
    image: httpd:2.2
    volumes:
      - myjob_volume:/usr/local/apache2/htdocs/
volumes:
  myjob_volume: