I have an ASP.NET Core Web API app that is deployed in a Linux Docker container.
Now I need to extend my app configuration with an emails array. That is not a problem by itself, but I need to pass this array through docker-compose.yml.
This is the C# code I use to retrieve the configuration:
List<string> emails = _config.GetSection("Checker:EMails").Get<List<string>>();
These are my Linux environment variables, set as explained by Microsoft:
export Checker__RefreshTime="86400000"
export Checker__DaysToCheck="15"
#emails array
export Checker__EMails__0="my_email1@my.com"
export Checker__EMails__1="my_email2p@my.com"
docker-compose.yml file:
environment:
  # the following line passes the host environment var to the container created from the image
  # Checker configuration
  - Checker__RefreshTime=${Checker__RefreshTime}
  - Checker__DaysToCheck=${Checker__DaysToCheck}
How can I include the emails array dynamically, without having to change the docker-compose.yml file every time? Because right now I need to do something like:
environment:
  # the following line passes the host environment var to the container created from the image
  # Checker configuration
  - Checker__RefreshTime=${Checker__RefreshTime}
  - Checker__DaysToCheck=${Checker__DaysToCheck}
  - Checker__EMails__0=${Checker__EMails__0}
  - Checker__EMails__1=${Checker__EMails__1}
  # etc.
You can use a file to store the environment variables and reference that file from your docker-compose.yml.
Instead of:
environment:
  - Checker__RefreshTime=${Checker__RefreshTime}
  - Checker__DaysToCheck=${Checker__DaysToCheck}
  - Checker__EMails__0=${Checker__EMails__0}
  - Checker__EMails__1=${Checker__EMails__1}
use:
env_file: filename.env
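For illustration, a filename.env along these lines would keep the whole array out of docker-compose.yml. The variable names come from the question; the third address is a hypothetical addition to show the point of the approach:

```shell
# filename.env -- one KEY=value pair per line, no "export" and no quoting needed
Checker__RefreshTime=86400000
Checker__DaysToCheck=15
Checker__EMails__0=my_email1@my.com
Checker__EMails__1=my_email2p@my.com
# adding a new address only touches this file, not docker-compose.yml:
Checker__EMails__2=my_email3@my.com
```

ASP.NET Core still sees the Checker__EMails__N variables inside the container and binds them to the List&lt;string&gt; exactly as before.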
Related
I just installed Filebeat on my remote server to collect the logs produced by an app. Everything seems OK: the ELK stack receives the data and I can view it in Kibana.
Today, I want to collect the logs generated by two webapps hosted on the same Tomcat server, and I want to add a field that lets me filter on it in Kibana.
I am using the tomcat.yml module, which I want to duplicate as webapp1.yml and webapp2.yml.
In each of these files, I will add a field that corresponds to the name of my webapp:
webapp1.yml
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp1.log
    var.rsa_fields: true
    var.rsa.misc.context: webapp1
webapp2.yml
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp2.log
    var.rsa_fields: true
    var.rsa.misc.context: webapp2
But the Logstash index does not recognize this new context field.
How can I solve this?
Thanks for your help.
So, I found the solution...
- module: tomcat
  log:
    enabled: true
    var.input: file
    var.paths:
      - c:\app\webapp1.log
    # Toggle output of non-ECS fields (default true).
    #var.rsa_fields: true
    input:
      processors:
        - add_fields:
            target: fields
            fields:
              application-name: webapp1
I have a server running Docker containers with Traefik. Let's say the machine's hostname is machine1.example.com, and each service runs as a subdomain, e.g. srv1.machine1.example.com, srv2.machine1.example.com, srv3.machine1.example.com....
I want to have LetsEncrypt generate a Wildcard certificate for *.machine1.example.com and use it for all of the services instead of generating a separate certificate for each service.
The annoyance is that I have to put the configuration lines into every single service's labels:
labels:
  - traefik.http.routers.srv1.rule=Host(`srv1.machine1.example.com`)
  - traefik.http.routers.srv1.tls=true
  - traefik.http.routers.srv1.tls.certresolver=myresolver
  - traefik.http.routers.srv1.tls.domains[0].main=machine1.example.com
  - traefik.http.routers.srv1.tls.domains[0].sans=*.machine1.example.com
labels:
  - traefik.http.routers.srv2.rule=Host(`srv2.machine1.example.com`)
  - traefik.http.routers.srv2.tls=true
  - traefik.http.routers.srv2.tls.certresolver=myresolver
  - traefik.http.routers.srv2.tls.domains[0].main=machine1.example.com
  - traefik.http.routers.srv2.tls.domains[0].sans=*.machine1.example.com
# etc.
This gets to be a lot of seemingly-needless boilerplate.
I tried to work around it (in a way that is still ugly and annoying, but less so) by using the templating feature of the file provider, like this:
[http]
[http.routers]
{{ range $i, $e := list "srv1" "srv2" }}
[http.routers."{{ $e }}".tls]
certResolver = "letsencrypt"
[[http.routers."{{ $e }}".tls.domains]]
main = "machine1.example.com"
sans = ["*.machine1.example.com"]
{{ end }}
That did not work because the routers created here are srv1#file, srv2#file instead of srv1#docker, srv2#docker which are created by the docker-compose configuration.
Is there any way to specify this configuration only once and have it apply to multiple services?
Heroku currently provides the database credentials as one connection string, e.g.: postgres://foo:bar@baz/fubarDb.
My development environment consists of a PostgreSQL container and an app container orchestrated with a docker-compose file. The docker-compose file supplies environment variables from .env file which currently looks a little like this:
DB_USER=foo
DB_PASS=bar
DB_HOST=baz
These credentials are passed to my app as environment variables and all is well. The problem is that, in order to make this work, I have to null-check my DB connection info to see if there is a full database URL and use that... otherwise fall back to the individual credentials.
D'oh! So, it turns out the solution was to modify my docker-compose file to take the environment variables and concatenate them into a connection string, thusly:
diff --git a/docker-compose.yml b/docker-compose.yml
index 21f0d47..74a574e 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -16,14 +16,10 @@ services:
context: .
target: base
environment:
- - APP_NAME
- - DB_HOST
- - DB_PORT
- - DB_NAME
- - DB_USER
- - DB_PASS
- - NODE_ENV
- - PORT
+ APP_NAME: ${APP_NAME}
+ NODE_ENV: ${NODE_ENV}
+ PORT: ${PORT}
+ DATABASE_URL: postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}
user: "1000:1000"
ports:
- "127.0.0.1:${PORT}:${PORT}"
No more trying to guess which environment variables to look for!
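As a quick sanity check of the interpolation, plain shell expansion produces the same URL that docker-compose will inject. This is a sketch using the placeholder values from the .env file above, plus an assumed port and database name:

```shell
#!/bin/sh
# Simulate docker-compose's ${VAR} interpolation with ordinary shell expansion.
DB_USER=foo
DB_PASS=bar
DB_HOST=baz
DB_PORT=5432
DB_NAME=fubarDb
DATABASE_URL="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
# prints: postgres://foo:bar@baz:5432/fubarDb
```

The app then only needs to read the single DATABASE_URL variable, matching what Heroku provides in production.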
I have a few containers brought up with docker-compose, and I want to perform authentication on one of the containers.
Below is the piece that I assume should do that, but requests don't go through authentication-backend-nginx-private; they land directly on mds-backend-nginx-private. I'm out of ideas about what could be wrong in the config...
It works if authforward is configured globally (in the toml file under the entrypoint section), but I want it per particular container.
mds-backend-nginx-private:
  <<: *nginx-common
  ports:
    - 8186:80
  networks:
    - cloud_private
    - mds-backend
  restart: on-failure
  environment:
    - NGINX_SERVER_NAME=mds-backend-nginx-private
    - WEBSITE_PROXY_NAME=mds-backend-web-private
    - WEBSITE_PROXY_PORT=8000
  labels:
    - "traefik.http.middlewares.authf.ForwardAuth.Address=http://authentication-backend-nginx-private/api/v1/gateway/account?with_credentials=true"
    - "traefik.docker.network=cloud_private"
    - "traefik.http.routers.mds-backend.middlewares=authf"
    - "traefik.frontend.rule=PathPrefix: /api/v1/mds/"
Maybe you are trying to use the middleware feature with an old Traefik version.
It works in the toml file because you are using the forward-auth feature present in old versions.
Check that your Traefik image tag is 2.0 or greater:
https://hub.docker.com/_/traefik
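For reference, on Traefik v2 the per-container configuration would look roughly like this. This is a hedged sketch reusing the names from the question; note that v2 uses lowercase middleware option names and replaces the v1 traefik.frontend.rule label with a router rule:

```yaml
labels:
  # v2 middleware definition (lowercase "forwardauth.address")
  - "traefik.http.middlewares.authf.forwardauth.address=http://authentication-backend-nginx-private/api/v1/gateway/account?with_credentials=true"
  # v2 router rule replaces the v1 traefik.frontend.rule label
  - "traefik.http.routers.mds-backend.rule=PathPrefix(`/api/v1/mds/`)"
  - "traefik.http.routers.mds-backend.middlewares=authf"
  - "traefik.docker.network=cloud_private"
```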
I have a set of parameters that need to be initialized for ElasticMQ SQS. Right now I have added it in the controller as below:
sqs = RightAws::SqsGen2.new("ABCD","DEFG",{:server=>"localhost",:port=>9324,:protocol=>"http"})
What is a better way to set this in the config folder and access it in the controller, and how do I do it? Please help.
Create a config file config/config.yml that will store the config variables for the different environments, and load it in config/application.rb:
development:
  elasticmq:
    server: localhost
    port: 9324
    protocol: 'http'

production:
  elasticmq:
    server:
    port:
    protocol:
test:
In config/application.rb:
CONFIG = YAML.load_file("config/config.yml")[Rails.env]
The CONFIG variable is now available in the controller. So now you can do the following:
sqs = RightAws::SqsGen2.new("ABCD", "DEFG", { :server => CONFIG['elasticmq']['server'], :port => CONFIG['elasticmq']['port'], :protocol => CONFIG['elasticmq']['protocol'] })
Passing the values directly (instead of interpolating them into strings) keeps the port as an Integer, exactly as it was loaded from the YAML file.