How to configure the logger of schema registry image with helm chart deployment - confluent-schema-registry

I use Schema Registry 5.4 - https://hub.docker.com/r/confluentinc/cp-schema-registry/ - and deploy it with the Helm chart https://github.com/helm/charts/tree/master/incubator/schema-registry.
To set my logger, I create a ConfigMap and mount it into the schema registry container that uses the image confluentinc/cp-schema-registry:5.4.0:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "schema-registry.fullname" . }}-log4j-configmap
  labels:
    app: {{ template "schema-registry.name" . }}
    chart: {{ template "schema-registry.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  log4j.properties: |+
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.encoding=UTF-8
But I don't know what should be changed in the container's command, or whether there is a variable pointing to log4j.properties that I can set.
For example, I plan to change the command like the following:
command:
  - java
  - -Dlog4j.configuration=file:/etc/schema-registry-log4j/log4j.properties
  - -jar
  - schema-registry.jar
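For reference, a minimal sketch of how this ConfigMap could be mounted into the pod spec is shown below. The mount path mirrors the -Dlog4j.configuration value above and the rendered ConfigMap name is hypothetical; whether the stock cp-schema-registry entrypoint actually honours a file at that path is something to verify.

# Sketch only: mount the log4j ConfigMap into the schema-registry container.
spec:
  containers:
    - name: schema-registry
      image: confluentinc/cp-schema-registry:5.4.0
      volumeMounts:
        - name: log4j-config
          mountPath: /etc/schema-registry-log4j   # matches the -Dlog4j.configuration path above
  volumes:
    - name: log4j-config
      configMap:
        name: my-release-schema-registry-log4j-configmap   # hypothetical rendered name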

Option 1:
One way I found is to build a new image (FROM the original one) and put my custom log4j.properties there.
https://docs.confluent.io/current/installation/docker/development.html#log-to-external-volumes
Option 2:
The other way is to set an environment variable
SCHEMA_REGISTRY_LOG4J_LOGGERS="org.apache.kafka=ERROR,io.confluent.rest.exceptions=FATAL"
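A minimal sketch of Option 2 applied to the container spec is below. SCHEMA_REGISTRY_LOG4J_LOGGERS is taken from above; SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL is an additional variable the Confluent images document for the root logger - verify both against the cp-schema-registry docs for your version.

# Sketch only: configure log levels through environment variables instead of a mounted file.
env:
  - name: SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL   # assumption: supported by the cp image
    value: "WARN"
  - name: SCHEMA_REGISTRY_LOG4J_LOGGERS
    value: "org.apache.kafka=ERROR,io.confluent.rest.exceptions=FATAL"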

Related

Custom labels/constants in Alertmanager config

I have an Alertmanager config file which defines many alerts. All of these alerts use the same time range in their Prometheus expressions and messages. Is there a way I can define a constant once in the file and then use it throughout? That way, if I ever change the value of this constant, it only needs to be changed in one place.
Here's an illustration of what I am trying to achieve:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kube-prometheus-stack-services
  namespace: monitoring
  labels:
    app: kube-prometheus-stack
    release: kube-prometheus-stack
  customer_variables:
    time_range: 5m # CUSTOM VARIABLE DEFINED HERE
spec:
  groups:
    - name: Service1
      rules:
        - alert: Service1Alert1
          annotations:
            summary: "Some summary of alert1"
            message: "Some error rate has increased to {{ $value }}% in the last {{ $customer_variables.time_range }}." # CUSTOM VARIABLE USED HERE
          expr: # some expression using my_metric_1{label="..."}[{{ $customer_variables.time_range }}] # CUSTOM VARIABLE USED HERE
          for: {{ $customer_variables.time_range }} # CUSTOM VARIABLE USED HERE
          labels:
            severity: critical
        - alert: Service1Alert2
          annotations:
            summary: "Some summary of alert2"
            message: "Some other error rate has increased to {{ $value }}% in the last {{ $customer_variables.time_range }}." # CUSTOM VARIABLE USED HERE
          expr: # some expression using my_metric_1{label="..."}[{{ $customer_variables.time_range }}] # CUSTOM VARIABLE USED HERE
          for: {{ $customer_variables.time_range }} # CUSTOM VARIABLE USED HERE
          labels:
            severity: critical
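If these PrometheusRule manifests are rendered through Helm (as elsewhere on this page), one way to get a single source of truth is to keep the constant in the chart's values.yaml and let the template substitute it everywhere. A minimal sketch under that assumption - the alerts.timeRange value name is hypothetical:

# values.yaml (hypothetical)
alerts:
  timeRange: 5m

# templates/prometheusrule.yaml (sketch of one rule); the backtick form
# {{ `{{ $value }}` }} keeps Prometheus' own template expression out of Helm's hands.
- alert: Service1Alert1
  annotations:
    summary: "Some summary of alert1"
    message: "Some error rate has increased to {{ `{{ $value }}` }}% in the last {{ .Values.alerts.timeRange }}."
  expr: # some expression using my_metric_1{label="..."}[{{ .Values.alerts.timeRange }}]
  for: {{ .Values.alerts.timeRange }}
  labels:
    severity: critical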

helm config map from external yaml file

I want to update my Helm dependencies with configuration that is declared in a central folder and shared among the microservices.
I have the following folder tree:
- config-repo
  - application.yml
  - specific.yml
- kubernetes
  - helm
    - common
    - components
      - microservice#1 (templates relating to it)
        - config-repo
          - application.yml
          - specific.yml
        - templates
          - configmap_from_file.yaml
        - values.yaml
        - Chart.yaml
      - microservice#2
        - ...
Here is microservice#1's configmap_from_file.yaml template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "common.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "common.name" . }}
    helm.sh/chart: {{ include "common.chart" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
{{ (.Files.Glob "config-repo/*").AsConfig | indent 2 }}
{{- end -}}
The files inside microservice#1's config-repo are both just references to the central files:
specific.yml -> ../../../../../config-repo/specific.yml
application.yml -> ../../../../../config-repo/application.yml
When I run helm dep update . and then helm template . -s templates/configmap_from_file.yaml, I expect the following output:
apiVersion: v1
kind: ConfigMap
metadata:
  name: review
  labels:
    app.kubernetes.io/name: review
    helm.sh/chart: review-1.0.0
    app.kubernetes.io/managed-by: Helm
data:
  application.yml: CONTENTS OF THE FILE IN ADDRESS
  specific.yml: CONTENTS OF THE FILE IN ADDRESS
but the following appears instead:
apiVersion: v1
kind: ConfigMap
metadata:
  name: review
  labels:
    app.kubernetes.io/name: review
    helm.sh/chart: review-1.0.0
    app.kubernetes.io/managed-by: Helm
data:
  application.yml: ../../../../../config-repo/application.yml
  specific.yml: ../../../../../config-repo/specific.yml
Why is only the referenced path injected into the ConfigMap, and not the contents of the files?
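For reference, a minimal sketch of the same ConfigMap written against files that physically live inside the chart is shown below. It assumes the central files are copied (not merely referenced) into the chart's config-repo directory before rendering, since Helm's .Files functions can only read files that are part of the chart itself.

# Sketch only: inline each file explicitly with .Files.Get, assuming real copies
# of the files exist under the chart's config-repo/ directory.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "common.fullname" . }}
data:
  application.yml: |-
{{ .Files.Get "config-repo/application.yml" | indent 4 }}
  specific.yml: |-
{{ .Files.Get "config-repo/specific.yml" | indent 4 }}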

Drone template not triggering build

Below is what our .drone.yml looks like (the templates are also listed below); this is an example configuration very close to what we want in production. The reason we are using templates is that our staging and production configurations are similar, with only the values differing (hence the circuit template), and we wanted to remove the duplication using the template circuit.yaml.
But currently we are unable to do so. If I don't define the test.yaml template and have the test step imported without a template (while keeping the circuit template to avoid the duplicate declaration of the staging and production builds), the Drone build fails with:
"template converter: template name given not found"
If I define the test step as a template, I see the test step working, but on creating a tag I see the following error:
{"commit":"28ac7ad3a01728bd1e9ec2992fee36fae4b7c117","event":"tag","level":"info","msg":"trigger: skipping build, no matching pipelines","pipeline":"test","ref":"refs/tags/v1.4.0","repo":"meetme2meat/drone-example","time":"2022-01-07T19:16:15+05:30"}
---
kind: template
load: test.yaml
data:
  commands:
    - echo "machine github.com login $${GITHUB_LOGIN} password $${GITHUB_PASSWORD}" > /root/.netrc
    - chmod 600 /root/.netrc
    - go clean -testcache
    - echo "Running test"
    - go test -race ./...
---
kind: template
load: circuit.yaml
data:
  deploy: deploy
  create_tags:
    commands:
      - echo "Deploying version $DRONE_SEMVER"
      - echo -n "$DRONE_SEMVER,latest" > .tags
  backend_image:
    version: ${DRONE_SEMVER}
    tags:
      - '${DRONE_SEMVER}'
      - latest
And the templates are below.
test.yaml
kind: pipeline
type: docker
name: test
steps:
  - name: test
    image: golang:latest
    environment:
      GITHUB_LOGIN:
        from_secret: github_username
      GITHUB_PASSWORD:
        from_secret: github_token
    commands:
      {{range .input.commands }}
      - {{ . }}
      {{end}}
    volumes:
      - name: deps
        path: /go
  - name: build
    image: golang:alpine
    commands:
      - go build -v -o out .
    volumes:
      - name: deps
        path: /go
volumes:
  - name: deps
    temp: {}
trigger:
  branch:
    - main
  event:
    - push
    - pull_request
circuit.yaml
kind: pipeline
type: docker
name: {{ .input.deploy }}
steps:
  - name: create-tags
    image: alpine
    commands:
      {{range .input.create_tags.commands }}
      - {{ . }}
      {{end}}
  - name: build
    image: plugins/docker
    environment:
      GITHUB_LOGIN:
        from_secret: github_username
      GITHUB_PASSWORD:
        from_secret: github_token
      VERSION: {{ .input.backend_image.version }}
      SERVICE: circuits
    settings:
      auto_tag: false
      repo: ghcr.io/meetme2meat/drone-ci-example
      registry: ghcr.io
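One hedged observation from the trigger block in test.yaml above: that pipeline only matches push and pull_request events on main, so a tag event has no pipeline to match, which is consistent with the "trigger: skipping build, no matching pipelines" log line. A sketch of a trigger that also matches tag events is below; whether this fits your intended flow, and how it interacts with the branch condition, is something to check against the Drone trigger documentation.

# Sketch only: a trigger for the pipeline(s) that should also run on tags.
trigger:
  event:
    - push
    - pull_request
    - tag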

Conftest Fails For a Valid Kubernetes YAML File

I have the following simple Kubernetes YAML Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.plantSimulatorService.image.repository }}:{{ .Values.plantSimulatorService.image.tag }}
          ports:
            - containerPort: {{ .Values.plantSimulatorService.ports.containerPort }} # Get this value from ConfigMap
I have the following in my test.rego:
package main

import data.kubernetes

name = input.metadata.name

deny[msg] {
  kubernetes.is_deployment
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg = sprintf("Containers must not run as root in Deployment %s", [name])
}
When I ran this using the following command:
joesan@joesan-InfinityBook-S-14-v5:~/Projects/Private/infrastructure-projects/plant-simulator-deployment$ helm conftest helm-k8s -p test
WARN - Found service plant-simulator-service but services are not allowed
WARN - Found service plant-simulator-grafana but services are not allowed
WARN - Found service plant-simulator-prometheus but services are not allowed
FAIL - Containers must not run as root in Deployment plant-simulator
FAIL - Deployment plant-simulator must provide app/release labels for pod selectors
As you can see, I'm indeed not running the container as root, but despite that I get this error message - Containers must not run as root in Deployment plant-simulator.
Any ideas what the reason could be?
You need to add runAsNonRoot to your securityContext:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  runAsNonRoot: true
The Rego rule is only able to validate the YAML structure - it is not clever enough to work out that your configuration effectively runs as a non-root user.

Problems with Traefik/Keycloak (and Gatekeeper) in front of Kibana

I want to use Keycloak as a standard way of authenticating users to applications running in our Kubernetes clusters. One of the clusters is running the Elastic ECK component (v1.1.1) and we use the operator to deploy Elastic clusters and Kibana as a frontend. In order to keep things as simple as possible I’ve done the following.
Deployed Kibana
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: {{ .Values.kibana.name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
spec:
  version: {{ .Values.kibana.version }}
  count: {{ .Values.kibana.instances }}
  elasticsearchRef:
    name: {{ .Values.kibana.elasticCluster }}
    namespace: {{ .Release.Namespace }}
  podTemplate:
    spec:
      containers:
        - name: kibana
          env:
            - name: SERVER_BASEPATH
              value: {{ .Values.kibana.serverBasePath }}
          resources:
            requests:
              {{- if not .Values.kibana.cpu.enableBurstableQoS }}
              cpu: {{ .Values.kibana.cpu.requests }}
              {{- end }}
              memory: {{ .Values.kibana.memory.requests }}Gi
            limits:
              {{- if not .Values.kibana.cpu.enableBurstableQoS }}
              cpu: {{ .Values.kibana.cpu.limits }}
              {{- end }}
              memory: {{ .Values.kibana.memory.limits }}Gi
  http:
    tls:
      selfSignedCertificate:
        disabled: true
Created Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: kibana-{{ .Values.kibana.name }}-stripprefix
  namespace: {{ .Release.Namespace }}
spec:
  stripPrefix:
    prefixes:
      - {{ .Values.kibana.serverBasePath }}
    forceSlash: true
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.kibana.name }}-ingress
  namespace: {{ .Release.Namespace }}
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: http
    traefik.ingress.kubernetes.io/router.middlewares: {{ .Release.Namespace }}-kibana-{{ .Values.kibana.name }}-stripprefix@kubernetescrd
spec:
  rules:
    - http:
        paths:
          - path: {{ .Values.kibana.serverBasePath }}
            backend:
              servicePort: {{ .Values.kibana.port }}
              serviceName: {{ .Values.kibana.name }}-kb-http
Result
Deploying the above works perfectly fine. I'm able to reach the Kibana UI using the external IP exposed by our MetalLB component. I simply enter http://external IP/service/logging/kibana, I'm presented with the Kibana login screen, and I can log in using the "built in" authentication process.
Adding the Keycloak Gatekeeper
Now, if I add the following to the Kibana manifest, effectively adding the Keycloak Gatekeeper sidecar to the Kibana Pod:
- name: {{ .Values.kibana.name }}-gatekeeper
  image: "{{ .Values.kibana.keycloak.gatekeeper.repository }}/docker-r/keycloak/keycloak-gatekeeper:{{ .Values.kibana.keycloak.gatekeeper.version }}"
  args:
    - --config=/etc/keycloak-gatekeeper.conf
  ports:
    - containerPort: 3000
      name: proxyport
  volumeMounts:
    - name: gatekeeper-config
      mountPath: /etc/keycloak-gatekeeper.conf
      subPath: keycloak-gatekeeper.conf
volumes:
  - name: gatekeeper-config
    configMap:
      name: {{ .Release.Name }}-gatekeeper-config
with the following ConfigMap which is "mounted":
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-gatekeeper-config
  namespace: {{ .Release.Namespace }}
data:
  keycloak-gatekeeper.conf: |+
    redirection-url: {{ .Values.kibana.keycloak.gatekeeper.redirectionUrl }}
    discovery-url: https://.../auth/realms/{{ .Values.kibana.keycloak.gatekeeper.realm }}
    skip-openid-provider-tls-verify: true
    client-id: kibana
    client-secret: {{ .Values.kibana.keycloak.gatekeeper.clientSecret }}
    enable-refresh-tokens: true
    encryption-key: ...
    listen: :3000
    tls-cert:
    tls-private-key:
    secure-cookie: false
    upstream-url: {{ .Values.kibana.keycloak.gatekeeper.upstreamUrl }}
    resources:
      - uri: /*
        groups:
          - kibana
The upstream-url points to http://127.0.0.1:5601
and added an intermediary service:
In order to explicitly address the Gatekeeper proxy, I added another service, "keycloak-proxy", as such:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.kibana.name }}-keycloak-proxy
  namespace: {{ .Release.Namespace }}
spec:
  type: ClusterIP
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: cap-logging
  ports:
    - name: http
      protocol: TCP
      port: 8888
      targetPort: proxyport
and changed the backend definition in the Kibana Ingress to:
servicePort: 8888
serviceName: {{ .Values.kibana.name }}-keycloak-proxy
When I then issue the same URL as above, http://external IP/service/logging/kibana, I'm redirected to http://external IP/oauth/authorize?state=0db97b79-b980-4cdc-adbe-707a5e37df1b and get a "404 Page not found" error.
If I reconfigure the "keycloak-proxy" service into a NodePort, expose it on, say, port 32767, and issue http://host IP:32767, I'm presented with the Keycloak login screen on the Keycloak server!
If I look into the Gatekeeper startup log I find the following:
1.6018108005048046e+09 info starting the service {"prog": "keycloak-gatekeeper", "author": "Keycloak", "version": "7.0.0 (git+sha: f66e137, built: 03-09-2019)"}
1.6018108005051787e+09 info attempting to retrieve configuration discovery url {"url": "https://.../auth/realms/...", "timeout": "30s"}
1.601810800537417e+09 info successfully retrieved openid configuration from the discovery
1.6018108005392597e+09 info enabled reverse proxy mode, upstream url {"url": "http://127.0.0.1:5601"}
1.6018108005393562e+09 info using session cookies only for access and refresh tokens
1.6018108005393682e+09 info protecting resource {"resource": "uri: /*, methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT,TRACE, required: authentication only"}
1.6018108005398147e+09 info keycloak proxy service starting {"interface": ":3000"}
This is what I get when I try to access Kibana through the Gatekeeper proxy:
http://host/service/logging/kibana (gets redirected to) http://host/oauth/authorize?state=4dbde9e7-674c-4593-83f2-a8e5ba7cf6b5
and the Gatekeeper log:
1.601810963344485e+09 error no session found in request, redirecting for authorization {"error": "authentication session not found"}
I've been struggling with this for some time now and seem to be stuck! If anybody here knows what's going on, I'd be very grateful.
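One hedged observation from the symptoms above: the browser is redirected to /oauth/authorize on the external IP, but the Ingress only routes {{ .Values.kibana.serverBasePath }}, so Gatekeeper's own /oauth endpoints never reach the proxy and Traefik returns 404. A sketch of one possible direction is below: a separate Ingress (without the strip-prefix middleware) that routes /oauth to the keycloak-proxy service. The /oauth path, the object name, and the assumption that these endpoints must not be prefix-stripped should all be verified against the Gatekeeper documentation.

# Sketch only (assumptions noted above): route Gatekeeper's /oauth endpoints
# to the keycloak-proxy service so the authorization redirect is reachable.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.kibana.name }}-gatekeeper-oauth   # hypothetical name
  namespace: {{ .Release.Namespace }}
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: http
spec:
  rules:
    - http:
        paths:
          - path: /oauth
            backend:
              servicePort: 8888
              serviceName: {{ .Values.kibana.name }}-keycloak-proxy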