How to escape "{{" and "}}" in argo workflow - go-templates

I want to run an Argo workflow in which a value is surrounded by double braces. Argo tries to resolve it, but I don't want it to.
The following is a fragment of a Katib StudyJob workflow manifest.
workerSpec:
  goTemplate:
    rawTemplate: |-
      apiVersion: "kubeflow.org/v1beta1"
      kind: TFJob
      metadata:
        name: {{.WorkerID}}
        namespace: kubeflow
Here Argo tries to resolve {{.WorkerID}}, but I don't want it to.
How can I do this? How can I escape "{{" and "}}"?

Using the {% raw %} tag:
{% raw %} {{.WorkerID}} {% endraw %}
Jinja2 Reference

Assuming you're using Helm templates, you'd use a raw string literal inside a template action:
name: {{` {{.WorkerID}} `}}
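In context, a hedged sketch of the Katib fragment as it would look inside a Helm chart (only the name line differs from the question):

rawTemplate: |-
  apiVersion: "kubeflow.org/v1beta1"
  kind: TFJob
  metadata:
    name: {{` {{.WorkerID}} `}}
    namespace: kubeflow

Helm evaluates the outer {{ ... }} action and prints the backtick-quoted string verbatim, so the rendered manifest still contains {{.WorkerID}} for Katib to resolve later.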

Related

Adding ssl certificate to helm chart

I have added an SSL cert secret in Rancher and configured the Ingress file in the Helm chart as follows:
{{- $fullName := include "api-chart.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $apiIngressPath := .Values.ingress.apiPath -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app.kubernetes.io/name: {{ include "api-chart.name" . }}
    helm.sh/chart: {{ include "api-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: nginx
{{- with .Values.ingress.annotations }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
  tls:
    - hosts:
        - {{ .Values.ingress.host }}
      secretName: {{ .Values.ssl.certSecretName }}
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: 80
          - path: {{ $apiIngressPath }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: 8080
However, the default, fake Nginx certificate is still served when visiting the HTTPS site. Does the Nginx server configuration also need to be changed? If so, it seems strange that the certificate info would have to be added in two places. If not, what else could be wrong?
kubectl describe ingress gives the following response:
Name:             my-test-install-app72-project-jupyter-labs
Namespace:        default
Address:          10.240.0.4
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                                Path              Backends
  ----                                ----              --------
  project-jupyter-labs-2.company.com
                                      /test72-new-user  my-test-install-app72-project-jupyter-labs:80 (10.244.4.20:8888)
                                      /base-url         my-test-install-app72-project-jupyter-labs:8080 (10.244.4.20:8080)
Annotations:
  field.cattle.io/publicEndpoints:
    [{"addresses":["10.240.0.4"], "port":80, "protocol":"HTTP",
      "serviceName":"default:my-test-install-app72-project-jupyter-labs",
      "ingressName":"default:my-test-install-app72-project-jupyter-labs",
      "hostname":"project-jupyter-labs-2.company.com", "path":"/test72-new-user", "allNodes":false},
     {"addresses":["10.240.0.4"], "port":80, "protocol":"HTTP",
      "serviceName":"default:my-test-install-app72-project-jupyter-labs",
      "ingressName":"default:my-test-install-app72-project-jupyter-labs",
      "hostname":"project-jupyter-labs-2.company.com", "path":"/base-url", "allNodes":false}]
  kubernetes.io/ingress.class: nginx
  meta.helm.sh/release-name: my-test-install-app72
  meta.helm.sh/release-namespace: default
  nginx.ingress.kubernetes.io/proxy-body-size: 2G
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ---                ----                      -------
  Normal  CREATE  81s                nginx-ingress-controller  Ingress default/my-test-install-app72-project-jupyter-labs
  Normal  CREATE  81s                nginx-ingress-controller  Ingress default/my-test-install-app72-project-jupyter-labs
  Normal  UPDATE  23s (x2 over 23s)  nginx-ingress-controller  Ingress default/my-test-install-app72-project-jupyter-labs
  Normal  UPDATE  23s (x2 over 23s)  nginx-ingress-controller  Ingress default/my-test-install-app72-project-jupyter-labs
UPDATE:
I am having trouble accessing the error logs; it seems you need to exec into the container as root to see them. What I did find, however, is that the server section of the nginx.conf file contains the following:
ssl_certificate_by_lua_block {
    certificate.call()
}
If I change this to ssl_certificate and ssl_certificate_key paths pointing to the cert and key files that I manually added to the container, then it works.
Does the above ssl_certificate_by_lua_block look normal for the ingress.yaml file? If so, what else could be the problem? If not, what could be causing this to not be properly configured?
Applying the following patch seems to allow the correct SSL certificate to be made available for https:
kubectl patch ingress <app-instance-name> -p '{"spec":{"tls":[{"hosts":["project-jupyter-labs-2.company.com"], "secretName": "tls-secret-name"}]}}'
Why this solves the problem is still unclear to me. I would appreciate any possible explanations.
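For what it's worth, the patched object can be inspected afterwards with plain kubectl to confirm that spec.tls is present (same placeholder name as above):

kubectl get ingress <app-instance-name> -o yaml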
It's nearly impossible to deduce the cause without a minimal reproducible example from you. Have a look at what a minimal reproducible example should look like.
We know nothing about the resulting Ingress manifest (as generated by Helm), the Ingress Controller version and its configuration (including how it was installed), or the underlying Kubernetes environment.
Just a few hints:
Please remember that Ingress and Secret resources are namespaced objects, so in your case the Ingress should reference a Secret from the same namespace.
How exactly do you create the TLS secret? (See the example below.)
I can assure you that your case can be reproduced in a healthy Ingress Controller setup: whenever I create a secret referenced by an Ingress in the right namespace, it's automatically detected by the controller, added to a local store, and dynamic reconfiguration takes place.
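For reference, a minimal way to create such a secret in the Ingress's namespace (the secret name, namespace, and file paths here are placeholders, not taken from the question):

kubectl create secret tls tls-secret-name \
  --cert=tls.crt \
  --key=tls.key \
  --namespace default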
Lastly, I think your issue is better suited to be reported directly to the NGINX Ingress Controller GitHub project: https://github.com/kubernetes/ingress-nginx/issues/new

Variables in Kubernetes ConfigMaps

I'm currently working with some ConfigMaps, and I've noticed that some documents in a ConfigMap have redundant values, i.e. they reference the same value, e.g.:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
  labels:
    app: my-app
data:
  some_file: |-
    ...
    foo1=bar
    ...
  some_other_file: |-
    ...
    foo2=bar
    ...
Is it somehow possible to use a variable instead of writing bar twice?
That way I wouldn't have to search every config file if bar changes at some point.
No, it's not possible.
If the problem gets worse, you can always start using kustomize or Helm, which let you create templates for your Kubernetes manifests and use variables in those templates, as sketched below.
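For example, a minimal Helm template for the ConfigMap above could look like this (the shared.value key in values.yaml is a hypothetical name, not something from the question):

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
  labels:
    app: my-app
data:
  some_file: |-
    foo1={{ .Values.shared.value }}
  some_other_file: |-
    foo2={{ .Values.shared.value }}

With shared.value set to bar in values.yaml, changing it in one place updates both files.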

Drone self-hosted, pipeline routing between Drone servers

I have dev & prod kubernetes clusters with drone server in each. Both servers watching the same set of github repos.
I want to do smth like:
---
kind: pipeline
name: artifacts
drone_instance: dev # <--- magic routing
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/dev-*
---
kind: pipeline
name: deploy_dev
drone_instance: dev # <--- magic routing
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/dev-*
---
kind: pipeline
name: deploy_prod
drone_instance: prod # <--- magic routing
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/prod-*
That is, run different pipelines on different Drone instances. I was looking at the platform filter, but it does not seem to be available in Kubernetes mode. Has anyone hacked together something similar?
NOTE: corresponding gh thread https://github.com/drone/drone-runtime/issues/63
I got an answer from the drone.io team on Gitter:
I recommend using .drone.yml for prod, and then creating a .drone.dev.yml for dev. In your dev Drone instance, in the repository settings, point Drone at the .drone.dev.yml.
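A minimal sketch of that setup (the step bodies are placeholders, and each Drone instance's repository settings must point at its respective file):

# .drone.yml -- used by the prod Drone instance
kind: pipeline
name: deploy_prod
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/prod-*

# .drone.dev.yml -- used by the dev Drone instance (set in repo settings)
kind: pipeline
name: deploy_dev
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/dev-*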

Get minion ip in Saltstack Jinja template filtered by pillar

I'm currently busy creating an automated Netdata ephemeral cluster, which means I have a master Netdata node that the slaves connect to.
I've found a similar question and answer, but it uses a grain, whereas I use pillars.
I'm trying to get the Netdata master IP and distribute it to the minions running Netdata via a template, but this can be applied to other master-slave configurations as well (e.g. Postgres, Elasticsearch, etc.).
I'm assigning roles via pillars.
So my pillar file looks like:
server:
  roles:
    - netdata-master
    - grafana
And my jinja template:
{% set netdatamaster = ..... %}
[stream]
# stream metrics to another netdata
enabled = yes
# the IP and PORT of the master
destination = {{ netdatamaster }}:19999
Now I want the variable netdatamaster to contain the IPv4 address of the Netdata master. I just can't figure out a way to do this.
You can use the Salt Mine for this.
First add a mine_function to your netdata-master server. This can be configured in pillar or in the minion config file.
mine_functions:
  eth0_ip_addrs:
    mine_function: network.ip_addrs
    interface: eth0
The above mine_function enables other minions to request the value of network.ip_addrs for your netdata-master server.
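Note that mine data is refreshed on an interval, so after adding the mine_function you may need to force a refresh before the value is available; a standard way to do that, assuming the master's minion id is netdata-master_id:

salt 'netdata-master_id' mine.update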
You can request this data in different ways:
From the cli:
salt 'other_minion_id' mine.get 'netdata-master_id' eth0_ip_addrs
In your state files:
{{ salt['mine.get']('netdata-master_id', 'eth0_ip_addrs') }}
In your case you can put it at the top of your Jinja template file:
{% set netdatamaster = salt['mine.get']('netdata-master_id', 'eth0_ip_addrs') %}
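Note that mine.get returns a dictionary keyed by minion id, and network.ip_addrs returns a list of addresses, so the value usually still needs to be unpacked; a minimal sketch, assuming the master's minion id is netdata-master_id and eth0 carries a single address:

{% set mine_data = salt['mine.get']('netdata-master_id', 'eth0_ip_addrs') %}
{% set netdatamaster = mine_data['netdata-master_id'][0] %}
destination = {{ netdatamaster }}:19999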

Defining Variable Configuration in _config.yml in jekyll powered website

There are multiple sets of configuration: one you may want to use when running the site locally, and another when your site is running on a server (say GitHub).
I have defined a similar set of configuration in my _config.yml file like this:
title: Requestly
locale: en_US
description: Chrome Extension to modify HTTP(s) Requests
logo: site-logo.png
search: true
env: dev
config:
  dev:
    url: http://localhost:4000
  prod:
    url: http://requestly.github.io/blog
url: site.config[site.env].url  # Does not work
I have used {{ site.url }} everywhere else in my templates, layouts and posts.
How can I define site.url in my _config.yml file so that its value depends on the config and env defined in the same file?
PS: I know one of the ways is to change {{ site.url }} to {{ site.config[site.env].url }} in all the files. That should probably work.
I just want to know how to use variables in _config.yml. Is that even possible?
No, you cannot use variables in a _config file.
You can find more information here: Change site.url to localhost during jekyll local development
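A common workaround, and roughly what the linked answer suggests, is to keep a second config file with local overrides and pass both files when serving; values in later files override earlier ones (the file name _config_dev.yml is just a convention):

# _config_dev.yml
url: http://localhost:4000

# serve locally with both configs:
jekyll serve --config _config.yml,_config_dev.yml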
Yes, you can with Jekyll 3.8.0 or a later version now. Please give it a try.