Get minion IP in SaltStack Jinja template filtered by pillar

I'm currently busy creating an automated Netdata ephemeral cluster, which means I have a master Netdata node that the slaves connect to.
I've found a similar question and answer, but instead of using a grain I use pillars.
I'm trying to get the Netdata master IP and distribute it to the minions running Netdata via a template. The same approach could apply to other master-slave setups as well (e.g. Postgres, Elasticsearch, etc.).
I'm assigning roles via pillars.
So my pillar file looks like:
server:
  roles:
    - netdata-master
    - grafana
And my Jinja template:
{% set netdatamaster = ..... %}
[stream]
# stream metrics to another netdata
enabled = yes
# the IP and PORT of the master
destination = {{ netdatamaster }}:19999
Now I want the var netdatamaster to contain the IPv4 address of the Netdata master. I just can't figure out a way to do this.

You can use the Salt Mine for this.
First add a mine_function to your netdata-master server. This can be configured in pillar or in the minion config file.
mine_functions:
  eth0_ip_addrs:
    mine_function: network.ip_addrs
    interface: eth0
The above mine_function enables other minions to request the value of network.ip_addrs for your netdata-master server.
You can request this data in different ways:
From the cli:
salt 'other_minion_id' mine.get 'netdata-master_id' eth0_ip_addrs
In your state files:
{{ salt['mine.get']('netdata-master_id', 'eth0_ip_addrs') }}
In your case, you can put it at the top of your Jinja template file.
{% set netdatamaster = salt['mine.get']('netdata-master_id', 'eth0_ip_addrs') %}
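Note that mine.get returns a dictionary keyed by minion ID, and network.ip_addrs returns a list of addresses, so you normally have to index into the result before using it. A minimal sketch, assuming the master's minion ID is netdata-master_id (a placeholder; substitute your real ID):

{# mine.get returns {minion_id: [addresses, ...]} #}
{% set mine_data = salt['mine.get']('netdata-master_id', 'eth0_ip_addrs') %}
{% set netdatamaster = mine_data['netdata-master_id'][0] %}

Since you assign roles via pillar, you can also target on the role instead of hard-coding the ID (tgt_type is called expr_form on older Salt releases):

{% set by_role = salt['mine.get']('server:roles:netdata-master', 'eth0_ip_addrs', tgt_type='pillar') %}

Also keep in mind the mine only refreshes on the minion's mine_interval; after adding the mine_function you can force a refresh with salt '*' mine.update.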

Related

Kubernetes ingress custom JWT authentication cache key

We are leveraging Kubernetes ingress with external-service JWT authentication, using auth-url as part of the ingress.
Now we want to use the auth-cache-key annotation to control caching of the JWT token. Currently our external auth service just responds with 200/401 by looking at the token. All our components are backend micro-services with REST APIs; an incoming request may not be a UI request. How do we fill in the `auth-cache-key` for an incoming JWT token?
annotations:
  nginx.ingress.kubernetes.io/auth-url: http://auth-service/validate
  nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
  nginx.ingress.kubernetes.io/auth-cache-key: '$remote_user$http_authorization'
  nginx.ingress.kubernetes.io/auth-cache-duration: '1m'
  kubernetes.io/ingress.class: "nginx"
$remote_user$http_authorization is given as an example in the Kubernetes documentation. However, we are not sure $remote_user will be set in our case, because this is not external basic auth. How do we decide on the auth cache key in this case?
There are not many examples or much documentation around this.
Posting a general answer, as no further details or explanation were provided.
It's true that there is not much documentation around, so I decided to dig into the NGINX Ingress source code.
The value set in the annotation nginx.ingress.kubernetes.io/auth-cache-key is the variable $externalAuth.AuthCacheKey in the code:
{{ if $externalAuth.AuthCacheKey }}
set $tmp_cache_key '{{ $server.Hostname }}{{ $authPath }}{{ $externalAuth.AuthCacheKey }}';
set $cache_key '';
As you can see, $externalAuth.AuthCacheKey is used by the variable $tmp_cache_key, which is encoded to base64 format and set as the variable $cache_key using the Lua NGINX module:
rewrite_by_lua_block {
  ngx.var.cache_key = ngx.encode_base64(ngx.sha1_bin(ngx.var.tmp_cache_key))
}
Then $cache_key is used to set the variable $proxy_cache_key, which defines the key for caching:
proxy_cache_key "$cache_key";
Based on the above code, we can assume that we can use any NGINX variable to set nginx.ingress.kubernetes.io/auth-cache-key annotation. Please note that some variables are only available if the corresponding module is loaded.
Example: I set the following auth-cache-key annotation:
nginx.ingress.kubernetes.io/auth-cache-key: '$proxy_host$request_uri'
Then, on the NGINX Ingress controller pod, the file /etc/nginx/nginx.conf contains the following line:
set $tmp_cache_key '{my-host}/_external-auth-Lw-Prefix$proxy_host$request_uri';
If you set the auth-cache-key annotation to a nonexistent NGINX variable, NGINX will throw the following error:
nginx: [emerg] unknown "nonexistent_variable" variable
It's up to you which variables you need.
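For the JWT case in the question, one reasonable option is to key the cache on the Authorization header itself, since that is exactly what the external auth service inspects, and $http_authorization is a standard NGINX variable that is set for any request carrying the header, UI or not. A sketch, reusing the annotations from the question:

nginx.ingress.kubernetes.io/auth-cache-key: '$http_authorization'
nginx.ingress.kubernetes.io/auth-cache-duration: '1m'

Keeping the TTL short limits how long a revoked token would keep passing the cached auth check.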
Please also check the following articles and topics:
A Guide to Caching with NGINX and NGINX Plus
external auth provider results in a lot of external auth requests

Sharing Acme configuration for multiple Traefik services

I have a server running Docker containers with Traefik. Let's say the machine's hostname is machine1.example.com, and each service runs as a subdomain, e.g. srv1.machine1.example.com, srv2.machine1.example.com, srv3.machine1.example.com....
I want to have LetsEncrypt generate a Wildcard certificate for *.machine1.example.com and use it for all of the services instead of generating a separate certificate for each service.
The annoyance is that I have to put the configuration lines into every single service's labels:
labels:
  - traefik.http.routers.srv1.rule=Host(`srv1.machine1.example.com`)
  - traefik.http.routers.srv1.tls=true
  - traefik.http.routers.srv1.tls.certresolver=myresolver
  - traefik.http.routers.srv1.tls.domains[0].main=machine1.example.com
  - traefik.http.routers.srv1.tls.domains[0].sans=*.machine1.example.com

labels:
  - traefik.http.routers.srv2.rule=Host(`srv2.machine1.example.com`)
  - traefik.http.routers.srv2.tls=true
  - traefik.http.routers.srv2.tls.certresolver=myresolver
  - traefik.http.routers.srv2.tls.domains[0].main=machine1.example.com
  - traefik.http.routers.srv2.tls.domains[0].sans=*.machine1.example.com
# etc.
This gets to be a lot of seemingly-needless boilerplate.
I tried to work around it (in a way that is still ugly and annoying, but less so) by using the templating feature of the file provider, like this:
[http]
  [http.routers]
  {{ range $i, $e := list "srv1" "srv2" }}
  [http.routers."{{ $e }}".tls]
    certResolver = "letsencrypt"
    [[http.routers."{{ $e }}".tls.domains]]
      main = "machine1.example.com"
      sans = ["*.machine1.example.com"]
  {{ end }}
That did not work, because the routers created here are srv1#file and srv2#file instead of srv1#docker and srv2#docker, which are created by the docker-compose configuration.
Is there any way to specify this configuration only once and have it apply to multiple services?
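One approach worth trying (a sketch, not verified against your stack): Traefik v2 lets you declare TLS, the certificate resolver, and the certificate domains once on an entrypoint in the static configuration, and every router attached to that entrypoint inherits them. Assuming an entrypoint named websecure and your existing myresolver:

entryPoints:
  websecure:
    address: ":443"
    http:
      tls:
        certResolver: myresolver
        domains:
          - main: machine1.example.com
            sans:
              - "*.machine1.example.com"

With that in place, each service's labels can shrink to the router rule (and entrypoint) only:

labels:
  - traefik.http.routers.srv1.rule=Host(`srv1.machine1.example.com`)
  - traefik.http.routers.srv1.entrypoints=websecure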

Ansible Different hosts, different action

With Ansible, I need to copy a script to different clients/hosts and then modify a line in the script. The line depends on the client and is not the same each time.
The hosts all have the same name; each client's name is different.
Something like that:
lineinfile: >
  state=present
  dest=/path/to/myscript
  line="/personal line"
when: {{ clients/hosts }} is {{ client/host }}
As you can see, I have no idea how to proceed.
It sounds like there are some clients that have specific hosts associated with them, and the line in this script will vary based on the client.
In that case, you should use group vars. I've included a simplified example below.
Set up your hosts file like this:
[client1]
host1
host2
[client2]
host3
host4
Use group variables like this:
File group_vars/client1:
variable_script_line: echo "this is client 1"
File group_vars/client2:
variable_script_line: echo "this is client 2"
Create a template file named yourscript.sh.j2:
#!/bin/bash
# {{ ansible_managed }}
script line 1
script line 2
# below is the line that should be dynamic
{{ variable_script_line }}
And then use the template module like this:
---
- hosts: all
  tasks:
    - name: Deploy script to remote hosts
      template:
        src: /path/to/yourscript.sh.j2
        dest: /path/to/location/yourscript.sh
        mode: '0755'
Note that the path to your source template will be different if you're using a role.
Ultimately, when the play is run against client1 vs. client2, the content of the template will be rendered differently based on the variable (see more about variable scopes).
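To confirm which value each host will get before deploying, you can query the variable ad hoc with the debug module; a quick check, assuming the inventory above:

ansible client1 -m debug -a "var=variable_script_line"
ansible client2 -m debug -a "var=variable_script_line"

Every host in a group should report the line from that group's group_vars file.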

Defining Variable Configuration in _config.yml in jekyll powered website

There are multiple sets of configuration: one set you may want when running the site locally, and another when your site is running on a server (say GitHub).
I have defined a similar set of configuration in my _config.yml file, like this:
title: Requestly
locale: en_US
description: Chrome Extension to modify HTTP(s) Requests
logo: site-logo.png
search: true
env: dev
config:
  dev:
    url: http://localhost:4000
  prod:
    url: http://requestly.github.io/blog
url: site.config[site.env].url  # Does not work
I have used {{ site.url }} everywhere else in my templates, layouts and posts.
How can I define site.url in my _config.yml file so that its value depends on the config and env defined in the same file?
PS: I know one way is to change {{ site.url }} to {{ site.config[site.env].url }} in all the files. That should probably work.
I just want to know how to use variables in _config.yml. Is that even possible?
No, you cannot use variables in a _config file.
You can find more information here: Change site.url to localhost during jekyll local development
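The usual workaround is to keep production values in _config.yml and a small override file for local development, then pass both files to Jekyll; values in later files override earlier ones. A sketch, using a hypothetical _config_dev.yml:

# _config.yml (production defaults)
url: http://requestly.github.io/blog

# _config_dev.yml (local overrides)
url: http://localhost:4000

Run the site locally with:

jekyll serve --config _config.yml,_config_dev.yml

On the server (e.g. GitHub Pages) only _config.yml is read, so {{ site.url }} resolves correctly in both environments.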
Yes, you can with Jekyll 3.8.0 or a later version. Please give that a try.

How can I use the Chef JSON to set a Redis and Sidekiq configuration

I'm using AWS OpsWorks for a Rails application with Redis and Sidekiq and would like to do the following:
Override the maxmemory config for redis
Only run Redis & Sidekiq on a selected EC2 instance
My current JSON config only has the database.yml overrides:
{
  "deploy": {
    "appname": {
      "database": {
        "username": "user",
        "password": "password",
        "database": "db_production",
        "host": "db.host.com",
        "adapter": "mysql2"
      }
    }
  }
}
Override the maxmemory config for redis
Take a look and see if your Redis cookbook of choice gives you an attribute to set that or to provide custom config values. I know the main redisio cookbook lets you set config values, as I do on my stacks (I set the path to the on-disk cache, I believe).
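For example, with the redisio cookbook the servers attribute carries per-server config, so maxmemory could be overridden from the OpsWorks custom JSON alongside your existing deploy block (a sketch; double-check the attribute names against the cookbook version you use):

{
  "redisio": {
    "servers": [
      { "port": 6379, "maxmemory": "256mb" }
    ]
  }
}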
Only run Redis & Sidekiq on a selected EC2 instance
This part is easy: create a Layer for Redis (or Redis/Sidekiq) and add an instance to that layer.
Now, because Redis is on a different instance than your Rails server, you won't necessarily know what the IP address for your Redis server is. Especially since you'll probably want to use the internal EC2 IP address vs the public IP address for the box (using the internal address means you're already inside the default firewall).
Sooo... what you'll probably need to do is to write a custom cookbook for your app, if you haven't already. In your attributes/default.rb write some code like this:
# Look up the first instance registered in the OpsWorks Redis layer
redis_instance_details = nil
redis_stack_name = "REDIS"
redis_instance_name, redis_instance_details = node["opsworks"]["layers"][redis_stack_name]["instances"].first

# Fall back to localhost if the layer has no instances
redis_server_dns = "127.0.0.1"
if redis_instance_details
  redis_server_dns = redis_instance_details["private_dns_name"]
end
Then, later in the attributes file, use redis_server_dns to set your Redis config, for example:
default[:deploy][:appname][:environment_variables][:REDIS_URL] = "redis://#{redis_server_dns}:#{redis_port_number}"
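On the Rails side, Sidekiq can then pick up the address from that environment variable; a minimal sketch, assuming REDIS_URL is exported to the app environment as set above:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV["REDIS_URL"] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV["REDIS_URL"] }
end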
Hope this helps!