Rename environment variables

Heroku currently provides the database credentials as one connection string, e.g. postgres://foo:bar@baz/fubarDb.
My development environment consists of a PostgreSQL container and an app container orchestrated with a docker-compose file. The docker-compose file supplies environment variables from an .env file, which currently looks a little like this:
DB_USER=foo
DB_PASS=bar
DB_HOST=baz
These credentials are passed to my app as environment variables and all is well. The problem is that, to make this work, I have to null-check my DB connection info: if a full database URL is present, use it; otherwise fall back to the individual credentials.
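As a sketch, that fallback amounts to something like this (shown here as a hypothetical shell entrypoint using the variable names above; the real check lived in the app's connection code):

#!/bin/sh
# If a full DATABASE_URL was supplied (as on Heroku), use it as-is;
# otherwise assemble one from the individual credentials.
if [ -z "$DATABASE_URL" ]; then
  export DATABASE_URL="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
fi
exec "$@"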

D'oh! So it turns out the solution was to modify my docker-compose file to take the environment variables and concatenate them into a connection string, like so:
diff --git a/docker-compose.yml b/docker-compose.yml
index 21f0d47..74a574e 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -16,14 +16,10 @@ services:
context: .
target: base
environment:
- - APP_NAME
- - DB_HOST
- - DB_PORT
- - DB_NAME
- - DB_USER
- - DB_PASS
- - NODE_ENV
- - PORT
+ APP_NAME: ${APP_NAME}
+ NODE_ENV: ${NODE_ENV}
+ PORT: ${PORT}
+ DATABASE_URL: postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}
user: "1000:1000"
ports:
- "127.0.0.1:${PORT}:${PORT}"
No more trying to guess which environment variables to look for!
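A quick way to sanity-check the interpolation is docker-compose config, which prints the compose file with all variables resolved. With the hypothetical .env values above (plus assumed DB_PORT=5432 and DB_NAME=fubarDb entries), that gives:

$ docker-compose config | grep DATABASE_URL
    DATABASE_URL: postgres://foo:bar@baz:5432/fubarDb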

Related

Icinga2 event plugin command starting a rundeck job via api

I made myself a test environment in Icinga2 with a Tomcat server. I would like to combine the two tools Rundeck and Icinga. My idea is to start a Rundeck job when Icinga detects a problem. In my case I have a Tomcat server where I fill up the swap memory, which should start the Rundeck job to clear the swap.
I am using the Icinga2 Director for management. I created an event plugin command that should execute the Rundeck API call as a script, called "rundeckapi". It looks like this:
#/usr/lib64/nagios/plugins/rundeckapi
#!/bin/bash
curl --location --request POST 'rundeck-server:4440/api/38/job/9f04657a-eaab-4e79-a5f3-00d3053f6cb0/run' \
--header 'X-Rundeck-Auth-Token: GuaoD6PtH5BhobhE3bAPo4mGyfByjNya' \
--header 'Content-Type: application/json' \
--header 'Cookie: JSESSIONID=node01tz8yvp4gjkly8kpj18h8u5x42.node0' \
--data-raw '{
    "options": {
        "IP": "192.168.4.13"
    }
}'
(I also tried to just paste the command into the command field in the Director, but this didn't work either.)
I placed it in the /usr/lib64/nagios/plugins/ directory and set the configuration in Icinga for the command as follows:
#zones.d/director-global/command.conf
object EventCommand "SWAP clear" {
  import "plugin-event-command"
  command = [ PluginDir + "/rundeckapi" ]
}
The service template looks like this:
#zones.d/master/service_templates.conf
template Service "SWAP" {
  check_command = "swap"
  max_check_attempts = "5"
  check_interval = 1m
  retry_interval = 15s
  check_timeout = 10s
  enable_notifications = true
  enable_active_checks = true
  enable_passive_checks = true
  enable_event_handler = true
  enable_flapping = true
  enable_perfdata = true
  event_command = "SWAP clear"
  command_endpoint = host_name
}
Then I added the service to the host.
I enabled debug mode, started to fill the swap, and watched the debug.log with tail -f /var/log/icinga2/debug.log | grep 'event handler', where I found this:
notice/Checkable: Executing event handler 'SWAP clear' for checkable 'centos_tomcat_3!SWAP'
centos_tomcat_3 is the host used for testing. It seems like the event handler is executing the script, but when I look at the Rundeck server, I can't find a running job. When I start the rundeckapi script manually, it works and I can see the job in Rundeck.
I already read the Icinga documentation, but it didn't help.
I would be very thankful if someone could help me. Thanks in advance.
Define the plugin as an event handler and assign it to the host.
I tested using this docker environment, modified with the official Rundeck image plus an NGINX host:
version: '2'
services:
  icinga2:
    #image: jordan/icinga2
    build:
      context: ./
      dockerfile: Dockerfile
    restart: on-failure:5
    # Set your hostname to the FQDN under which your
    # satellites will reach this container
    hostname: icinga2
    env_file:
      - secrets_sql.env
    environment:
      - ICINGA2_FEATURE_GRAPHITE=1
      # Important:
      # keep the hostname graphite the same as
      # the name of the graphite docker-container
      - ICINGA2_FEATURE_GRAPHITE_HOST=graphite
      - ICINGA2_FEATURE_GRAPHITE_PORT=2003
      - ICINGA2_FEATURE_GRAPHITE_URL=http://graphite
      # - ICINGA2_FEATURE_GRAPHITE_SEND_THRESHOLDS=true
      # - ICINGA2_FEATURE_GRAPHITE_SEND_METADATA=false
      - ICINGAWEB2_ADMIN_USER=admin
      - ICINGAWEB2_ADMIN_PASS=admin
      #- ICINGA2_USER_FULLNAME=Icinga2 Docker Monitoring Instance
      - DEFAULT_MYSQL_HOST=mysql
    volumes:
      - ./data/icinga/cache:/var/cache/icinga2
      - ./data/icinga/certs:/etc/apache2/ssl
      - ./data/icinga/etc/icinga2:/etc/icinga2
      - ./data/icinga/etc/icingaweb2:/etc/icingaweb2
      - ./data/icinga/lib/icinga:/var/lib/icinga2
      - ./data/icinga/lib/php/sessions:/var/lib/php/sessions
      - ./data/icinga/log/apache2:/var/log/apache2
      - ./data/icinga/log/icinga2:/var/log/icinga2
      - ./data/icinga/log/icingaweb2:/var/log/icingaweb2
      - ./data/icinga/log/mysql:/var/log/mysql
      - ./data/icinga/spool:/var/spool/icinga2
      # Sending e-mail
      # See: https://github.com/jjethwa/icinga2#sending-notification-mails
      # If you want to enable outbound e-mail, edit the file msmtp/msmtprc
      # and configure it to match your mail setup. The default is a
      # Gmail example but msmtp can be used for any MTA configuration.
      # Change the aliases in msmtp/aliases to your recipients.
      # Then uncomment the rows below
      # - ./msmtp/msmtprc:/etc/msmtprc:ro
      # - ./msmtp/aliases:/etc/aliases:ro
    ports:
      - "80:80"
      - "443:443"
      - "5665:5665"
  graphite:
    image: graphiteapp/graphite-statsd:latest
    container_name: graphite
    restart: on-failure:5
    hostname: graphite
    volumes:
      - ./data/graphite/conf:/opt/graphite/conf
      - ./data/graphite/storage:/opt/graphite/storage
      - ./data/graphite/log/graphite:/var/log/graphite
      - ./data/graphite/log/carbon:/var/log/carbon
  mysql:
    image: mariadb
    container_name: mysql
    env_file:
      - secrets_sql.env
    volumes:
      - ./data/mysql/data:/var/lib/mysql
      # If you have previously used the container's internal DB use:
      #- ./data/icinga/lib/mysql:/var/lib/mysql
  rundeck:
    image: rundeck/rundeck:3.3.12
    hostname: rundeck
    ports:
      - '4440:4440'
  nginx:
    image: nginx:alpine
    hostname: nginx
    ports:
      - '81:80'
Rundeck side:
To access Rundeck, open a new tab in your browser using the http://localhost:4440 web address. You can log in with user admin and password admin.
Create a new project and a new job. I created the following one, which you can import into your instance:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: c3e0860c-8f69-42f9-94b9-197d0706a915
  loglevel: INFO
  name: RestoreNGINX
  nodeFilterEditable: false
  options:
  - name: opt1
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "hello ${option.opt1}"
    keepgoing: false
    strategy: node-first
  uuid: c3e0860c-8f69-42f9-94b9-197d0706a915
Now go to the User icon (top right) > Profile, then click on the + icon in the "User API Tokens" section and save the API key string; it is needed for the API call script in the Icinga2 container.
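To verify the token before wiring it into Icinga2 (an optional check; substitute your own token string), you can call Rundeck's system info endpoint:

curl -H 'X-Rundeck-Auth-Token: <your-api-token>' \
     -H 'Accept: application/json' \
     http://localhost:4440/api/38/system/info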
Go to the Activity page (left menu) and click on the "Auto Refresh" checkbox.
Icinga2 side:
You can enter Icinga2 by opening a new tab in your browser using the http://localhost URL; I defined username admin and password admin in the docker-compose file.
Add the following script as a command at the /usr/lib/nagios/plugins path (it's a curl API call like in your scenario; the API key is the one generated in the third step of the "Rundeck side" section above):
#!/bin/bash
curl --location --request POST 'rundeck:4440/api/38/job/c3e0860c-8f69-42f9-94b9-197d0706a915/run' \
--header 'X-Rundeck-Auth-Token: Zf41wIybwzYhbKD6PrXn01ZMsV2aT8BR' \
--header 'Content-Type: application/json' \
--data-raw '{ "options": { "opt1": "world" } }'
Also make the script executable: chmod +x /usr/lib/nagios/plugins/restorenginx
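Running the script by hand from inside the Icinga2 container is a useful smoke test; a successful POST to the run endpoint returns JSON describing the queued execution (hypothetical output shown):

/usr/lib/nagios/plugins/restorenginx
# {"id":1,"href":"http://rundeck:4440/api/38/execution/1","permalink":"...","status":"running",...}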
In the Icinga2 browser tab, go to Icinga Director (left menu) > Commands. In the "Command Type" list select "Event Plugin Command", in the "Command Name" textbox type "restorenginx", and in the "Command" textbox put the full path of the script (/usr/lib/nagios/plugins/restorenginx). Then click the "Store" button (bottom) and then on "Deploy" (top).
This is the config preview (at zones.d/director-global/commands.conf):
object EventCommand "restorenginx" {
  import "plugin-event-command"
  command = [ "/usr/lib/nagios/plugins/restorenginx" ]
}
Now create the host template (in my example I'm using an NGINX container for monitoring): go to Icinga Director (left menu) > Hosts and select "Host Templates", then click on the + Add link (top). In the Name field type the host template name (I used "nginxSERVICE"), and in the "check command" textbox put the command used to check that the host is alive (I used "ping"). Then, in the Event command field, select the command created in the previous step.
Now it's time to create the host (based on the template from the previous step). Go to Icinga Director (left menu) > Hosts and select "Host", then click on the + Add link (top). In the hostname field type the server hostname (nginx, as defined in the docker-compose file), in "Imports" select the template created in the previous step ("nginxSERVICE"), type anything in the "Display name" textbox, and in the "Host address" field add the NGINX container IP. Click the "Store" button and then the "Deploy" link at the top.
To enable the event handler on the host, go to Overview (left menu) > Hosts, select "NGINX", scroll down in the right section, and enable "Event Handler" in the "Feature Commands" section.
NGINX side (it's time to test the script):
Stop the NGINX container and go to the Rundeck Activity page browser tab; you'll see the job launched by the Icinga2 monitoring tool.
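For example, from the docker-compose project directory (service name as defined in the compose file above):

docker-compose stop nginx
# Icinga2's ping check fails, the event handler fires,
# and the RestoreNGINX job appears on Rundeck's Activity page.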

How to pass environment array configuration properties with docker-compose dynamically?

I have an ASP.NET Core Web API app that is deployed in a Linux Docker container.
Now I need to extend my app configuration with an emails array. That is not a problem until I need to pass the array using docker-compose.yml.
This is the C# code I use to retrieve the configuration:
List<string> emails = _config.GetSection("Checker:EMails").Get<List<string>>();
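For reference, those environment variables bind to the same shape you would have in appsettings.json (a double underscore maps to the : separator, and numeric suffixes are array indices; values taken from the exports below):

{
  "Checker": {
    "RefreshTime": 86400000,
    "DaysToCheck": 15,
    "EMails": [ "my_email1@my.com", "my_email2p@my.com" ]
  }
}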
These are my Linux environment variables, as explained by Microsoft:
export Checker__RefreshTime="86400000"
export Checker__DaysToCheck="15"
#emails array
export Checker__EMails__0="my_email1@my.com"
export Checker__EMails__1="my_email2p@my.com"
docker-compose.yml file:
environment:
  # the following lines pass the host environment vars to the container created from the image
  # Checker configuration
  - Checker__RefreshTime=${Checker__RefreshTime}
  - Checker__DaysToCheck=${Checker__DaysToCheck}
How can I include the emails array dynamically, without having to change the docker-compose.yml file every time? Because right now I would need to do something like:
environment:
  # the following lines pass the host environment vars to the container created from the image
  # Checker configuration
  - Checker__RefreshTime=${Checker__RefreshTime}
  - Checker__DaysToCheck=${Checker__DaysToCheck}
  - Checker__EMails__0=${Checker__EMails__0}
  - Checker__EMails__1=${Checker__EMails__1}
  # etc..
You can use an env file to store the environment variables and reference the file from your docker-compose file.
Instead of:
environment:
  - Checker__RefreshTime=${Checker__RefreshTime}
  - Checker__DaysToCheck=${Checker__DaysToCheck}
  - Checker__EMails__0=${Checker__EMails__0}
  - Checker__EMails__1=${Checker__EMails__1}
use:
env_file: filename.env
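For example, filename.env could hold all the Checker values, including as many EMails entries as needed; adding another address then only means adding a line to the env file, not editing docker-compose.yml (values taken from the exports above):

# filename.env
Checker__RefreshTime=86400000
Checker__DaysToCheck=15
Checker__EMails__0=my_email1@my.com
Checker__EMails__1=my_email2p@my.com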

Portainer doesn't show icons anymore since upgrading to v2 (Traefik Proxy)

Since upgrading to Portainer v2, the icons suddenly stopped loading. I can still access Portainer (which is proxied by Traefik), but after a bit of testing I noticed that only / was forwarded; if a path was given, Traefik would throw a 404 error. This is a problem because Portainer loads its fonts from paths like /b15db15f746f29ffa02638cb455b8ec0.woff2.
There is one issue about this on GitHub, but I don't really know what to do with that information: https://github.com/portainer/portainer/issues/3706
My Traefik configuration
version: "2"
# Manage domain access to services
services:
traefik:
container_name: traefik
image: traefik
command:
- --api.dashboard=true
- --certificatesresolvers.le.acme.email=${ACME_EMAIL}
- --certificatesresolvers.le.acme.storage=acme.json
# Enable/Disable staging by commenting/uncommenting the next line
# - --certificatesresolvers.le.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
- --certificatesresolvers.le.acme.dnschallenge=true
- --certificatesresolvers.le.acme.dnschallenge.provider=cloudflare
- --entrypoints.http.address=:80
- --entrypoints.https.address=:443
- --global.sendAnonymousUsage
- --log.level=INFO
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.docker.network=traefik_proxy
restart: always
networks:
- traefik_proxy
ports:
- "80:80"
- "443:443"
dns:
- 1.1.1.1
- 1.0.0.1
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./acme.json:/acme.json
# - ./acme-staging.json:/acme.json
environment:
CF_API_EMAIL: ${CLOUDFLARE_EMAIL}
CF_API_KEY: ${CLOUDFLARE_API_KEY}
labels:
- traefik.enable=true
- traefik.http.routers.traefik0.entrypoints=http
- traefik.http.routers.traefik0.rule=Host(`${TRAEFIK_URL}`)
- traefik.http.routers.traefik0.middlewares=to_https
- traefik.http.routers.traefik.entrypoints=https
- traefik.http.routers.traefik.rule=Host(`${TRAEFIK_URL}`)
- traefik.http.routers.traefik.middlewares=traefik_auth
- traefik.http.routers.traefik.tls=true
- traefik.http.routers.traefik.tls.certresolver=le
- traefik.http.routers.traefik.service=api#internal
# Declaring the user list
#
# Note: all dollar signs in the hash need to be doubled for escaping.
# To create user:password pair, it's possible to use this command:
# echo $(htpasswd -nb user password) | sed -e s/\\$/\\$\\$/g
- traefik.http.middlewares.traefik_auth.basicauth.users=${TRAEFIK_USERS}
# Standard middleware for other containers to use
- traefik.http.middlewares.to_https.redirectscheme.scheme=https
- traefik.http.middlewares.to_https_perm.redirectscheme.scheme=https
- traefik.http.middlewares.to_https_perm.redirectscheme.permanent=true
networks:
traefik_proxy:
external: true
And my Portainer configuration
version: "2"
# Manage docker containers
services:
portainer:
container_name: portainer
image: portainer/portainer-ce
restart: always
networks:
- traefik_proxy
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./data/:/data/
labels:
- traefik.enable=true
- traefik.http.services.portainer.loadbalancer.server.port=9000
- traefik.http.routers.portainer0.entrypoints=http
- traefik.http.routers.portainer0.rule=Host(`${PORTAINER_URL}`)
- traefik.http.routers.portainer0.middlewares=to_https
- traefik.http.routers.portainer.entrypoints=https
- traefik.http.routers.portainer.rule=Host(`${PORTAINER_URL}`)
- traefik.http.routers.portainer.tls=true
- traefik.http.routers.portainer.tls.certresolver=le
networks:
traefik_proxy:
external: true
What do I have to change so that Traefik can forward the paths and Portainer can load the icons?
Could you try flushing your DNS cache?
In Chrome, enter 'chrome://net-internals/#dns' into the URL bar and press Enter.
Then click on 'Clear host cache'.
Then refresh your Portainer page.
I noticed that there is also an Alpine version of Portainer.
After switching to that (image: portainer/portainer-ce:alpine), the icons work again. I don't know what the issue is with the regular image, but this solves it for now.
PS: I had tried using the Access-Control headers on Traefik, but that didn't help. I guess it's a problem with Portainer's code itself.
If someone else is facing this issue: I resolved it by deleting my browser cache, or just doing a full refresh with Ctrl+Shift+R.

Include variables generated in first play of playbook

I have a playbook which consists of two plays:
1: Create inventory file and variables file on localhost
2: Use the variables in commands on generated inventory
Example playbook:
---
- name: Generating inventory and variables
  hosts: localhost
  vars_files:
    - variables.yml # variables file used for automating
  tasks:
    - name: Creating inventory template
      template:
        src: hosts.j2
        dest: "./inventories/{{ location }}/hosts"
        mode: 0777
        force: yes
      ignore_errors: yes
      run_once: true
    - meta: refresh_inventory
    - name: Creating predefined variables from a template
      template:
        src: predefined-vars.yml.j2
        dest: "./variables/predefined-vars.yml"

- name: Setting vlan to network devices
  remote_user: Ansible
  hosts: all
  vars_files:
    - variables.yml # variables file used for automating.
    - variables/predefined-vars.yml
  tasks:
    - name: configure Junos ROUTER for vlan
      include_tasks: ./roles/juniper/tasks/add_vlan_rt.yml
      when:
        - inventory_hostname in groups['junos_routers']
        - groups['junos_routers'] | length == 1
        - location == inventory_name
This gives an undefined variable error for a variable created in the first play.
Is there a way to do this? I use this for generating variables like router_port_name and so on; the variables depend on location and dedicated server, which are defined in variables.yml.
Any help is really appreciated.
Thanks
EDIT: However, I have noticed that this playbook:
---
- hosts: localhost
  gather_facts: false
  name: 1
  vars_files:
    - variables.yml
  tasks:
    - name: Creating predefined variables from a template
      template:
        src: predefined-vars.yml.j2
        dest: "./variables/predefined-vars.yml"

- name: Generate hosts file
  hosts: all
  vars_files:
    - variables.yml
    - ./variables/predefined-vars.yml
  tasks:
    - name: test
      debug: msg="{{ router_interface_name }}"
does show the variables created in the first play.
The difference I see is that the first playbook reads all the variable files used anywhere in the playbook (even predefined-vars.yml, created in the first play and used in the second) at the start of the first play, while the second playbook reads variables.yml in the first play and only reads predefined-vars.yml at the start of the second play.
Any ideas how to make the first playbook behave the same way?
So I have found the solution to the problem, based on the documentation and suggestions from other people.
What I understood about the problem:
A playbook reads all the variable files (of all plays) into a cache at the start for later use. So if I include predefined-vars.yml in vars_files and then change the file in the first play, the changes will not be used by later plays, because they read from the cache.
Thus I had to add another task to the second play, which reads (loads into the cache) the newly generated file for that play:
- name: Include predefined vars
  include_vars: ./variables/predefined-vars.yml
  run_once: true
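In context, the second play of the first playbook would then start roughly like this (a sketch based on the playbook above, with predefined-vars.yml removed from vars_files):

- name: Setting vlan to network devices
  remote_user: Ansible
  hosts: all
  vars_files:
    - variables.yml
  tasks:
    - name: Include predefined vars
      include_vars: ./variables/predefined-vars.yml
      run_once: true
    - name: configure Junos ROUTER for vlan
      include_tasks: ./roles/juniper/tasks/add_vlan_rt.yml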
Hope this helps you!
I still have no idea why the second playbook shows the variables, though...

Problem configuring Traefik's ForwardAuth middleware

I have a few containers brought up with docker-compose, and I want to perform authentication on one of the containers.
Below is the piece that I assume should do that, but requests don't go to authentication-backend-nginx-private; they land directly on mds-backend-nginx-private. I'm out of ideas as to what could be wrong with the config...
It works if the auth forward is configured globally (in the toml file under the entrypoint section), but I want it per container.
mds-backend-nginx-private:
  <<: *nginx-common
  ports:
    - 8186:80
  networks:
    - cloud_private
    - mds-backend
  restart: on-failure
  environment:
    - NGINX_SERVER_NAME=mds-backend-nginx-private
    - WEBSITE_PROXY_NAME=mds-backend-web-private
    - WEBSITE_PROXY_PORT=8000
  labels:
    - "traefik.http.middlewares.authf.ForwardAuth.Address=http://authentication-backend-nginx-private/api/v1/gateway/account?with_credentials=true"
    - "traefik.docker.network=cloud_private"
    - "traefik.http.routers.mds-backend.middlewares=authf"
    - "traefik.frontend.rule=PathPrefix: /api/v1/mds/"
Maybe you are trying to use the middleware feature with an old Traefik version.
It works in the toml file because you are using the "forward auth" feature present in old versions.
Check that your traefik image tag is equal to or greater than 2.0:
https://hub.docker.com/_/traefik
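If you are on v2, also note that the last label in the snippet (traefik.frontend.rule) is v1 syntax. In Traefik v2 the router and middleware labels would look roughly like this (a sketch using the names from the snippet above; v2 label keys are lowercase):

labels:
  - "traefik.enable=true"
  - "traefik.http.middlewares.authf.forwardauth.address=http://authentication-backend-nginx-private/api/v1/gateway/account?with_credentials=true"
  - "traefik.http.routers.mds-backend.rule=PathPrefix(`/api/v1/mds/`)"
  - "traefik.http.routers.mds-backend.middlewares=authf"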