App Mesh doesn't inject Envoy proxy to pod - amazon-eks

I'm integrating App Mesh with microservices on EKS. I have deployed the microservices and the App Mesh controller, and labelled my workload namespace for the sidecar injector webhook, but the App Mesh controller throws a reconcile error.
I have been working on this for a while and haven't figured out why I get the error. I would appreciate help from anyone who has implemented this successfully.
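For reference, the workload namespace is labelled for injection roughly like this (a sketch; I'm assuming the mesh membership label alongside the injector label):
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  labels:
    # assumed: ties the namespace to the myapp-mesh mesh used below
    mesh: myapp-mesh
    # enables Envoy sidecar injection for pods in this namespace
    appmesh.k8s.aws/sidecarInjectorWebhook: enabled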
These are the errors I'm getting:
"level":"error","ts":1633349081.3646858,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"virtualnode","request":"myapp/oidctokenconverter","error":"failed to resolve virtualServiceRef: unable to fetch virtualService: myapp/identity-oidctokenconverter: VirtualService.appmesh.k8s.aws \"oidctokenconverter\" not found","errorVerbose":"VirtualService.appmesh.k8s.aws \"oidctokenconverter\" not found\nunable to fetch virtualService
{{- if .Values.global.appMesh.enabled -}}
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: {{ template "myapp.fullname" . }}
  namespace: myapp
  labels:
{{ include "myapp.match-labels" . | indent 4 }}
{{ include "myapp.pod-extra-labels" . | indent 4 }}
spec:
  awsName: {{ template "myapp.fullname" . }}
  meshName: myapp-mesh
  podSelector:
    matchLabels:
      app: {{ template "myapp.name" . }}
      release: {{ .Release.Name }}
  listeners:
    - portMapping:
        port: 8080
        protocol: http
  backends:
    - virtualService:
        virtualServiceRef:
          name: {{ template "myapp.fullname" . }}
          namespace: myapp
  serviceDiscovery:
    dns:
      hostname: {{ template "myapp.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
{{- end }}
{{- if .Values.global.appMesh.enabled -}}
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: {{ template "myapp.fullname" . }}
  namespace: myapp
  labels:
{{ include "myapp.match-labels" . | indent 4 }}
{{ include "myapp.pod-extra-labels" . | indent 4 }}
spec:
  meshName: myapp-mesh
  awsName: {{ template "myapp.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
  provider:
    virtualNode:
      virtualNodeRef:
        name: {{ template "myapp.fullname" . }}
{{- end }}

Related

Filebeat autodiscover doesn't work with nomad

I'm having some trouble configuring autodiscover to run with Nomad, to get allocation logs from the Nomad agent and send them to Logstash. Every time, all I get in the Filebeat logs is this message:
"message":"Non-zero metrics in the last 30s"
but in Nomad I have a job that is generating logs every 60 seconds.
This is my filebeat config:
#### debug mode
loggin.info: debug
#### logstash output
output.logstash:
  hosts: [ "xxx.xxx.xxx.xxx:5055"]
  slow_start: true
### autodicover
filebeat.autodiscover:
  providers:
    - type: nomad
      scope: cluster
      hints.enabled: true
      allow_stale: true
      templates:
        config:
          - type: log
            paths:
              - /opt/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/*
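For comparison, the nomad provider example in the Filebeat documentation (as far as I remember it, so treat the exact layout as an assumption) nests each template entry as a list item under templates:
filebeat.autodiscover:
  providers:
    - type: nomad
      scope: cluster
      hints.enabled: true
      allow_stale: true
      templates:
        - config:
            - type: log
              paths:
                - /opt/nomad/alloc/${data.nomad.allocation.id}/alloc/logs/*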
These are the Filebeat logs:
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.049Z","log.origin":{"file.name":"instance/beat.go","file.line":293},"message":"Setup Beat: filebeat; Version: 8.5.0","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.054Z","log.logger":"publisher","log.origin":{"file.name":"pipeline/module.go","file.line":113},"message":"Beat name: ip-10-202-9-107.ec2.internal","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.055Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":120},"message":"Enabled modules/filesets: ","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","#timestamp":"2022-11-10T19:35:15.056Z","log.origin":{"file.name":"beater/filebeat.go","file.line":162},"message":"Filebeat is unable to load the ingest pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the ingest pipelines or are using Logstash pipelines, you can ignore this warning.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.057Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":144},"message":"Starting metrics logging every 30s","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.058Z","log.origin":{"file.name":"instance/beat.go","file.line":470},"message":"filebeat start running.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.059Z","log.origin":{"file.name":"memlog/store.go","file.line":134},"message":"Finished loading transaction log file for '/var/lib/filebeat/registry/filebeat'. Active transaction id=0","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","#timestamp":"2022-11-10T19:35:15.059Z","log.origin":{"file.name":"beater/filebeat.go","file.line":288},"message":"Filebeat is unable to load the ingest pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the ingest pipelines or are using Logstash pipelines, you can ignore this warning.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.060Z","log.logger":"registrar","log.origin":{"file.name":"registrar/registrar.go","file.line":109},"message":"States Loaded from registrar: 0","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.061Z","log.logger":"crawler","log.origin":{"file.name":"beater/crawler.go","file.line":71},"message":"Loading Inputs: 0","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.061Z","log.logger":"crawler","log.origin":{"file.name":"beater/crawler.go","file.line":106},"message":"Loading and starting Inputs completed. Enabled inputs: 0","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"warn","#timestamp":"2022-11-10T19:35:15.062Z","log.logger":"cfgwarn","log.origin":{"file.name":"nomad/nomad.go","file.line":58},"message":"EXPERIMENTAL: The nomad autodiscover provider is experimental.","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.063Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":120},"message":"Enabled modules/filesets: ","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.064Z","log.logger":"autodiscover","log.origin":{"file.name":"autodiscover/autodiscover.go","file.line":118},"message":"Starting autodiscover manager","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:15.696Z","log.logger":"nomad","log.origin":{"file.name":"nomad/watcher.go","file.line":192},"message":"Watching API for resource events","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","#timestamp":"2022-11-10T19:35:45.066Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000}},"id":"filebeat.service"},"cpuacct":{"id":"filebeat.service","total":{"ns":650388214}},"memory":{"id":"filebeat.service","mem":{"limit":{"bytes":9223372036854771712},"usage":{"bytes":33452032}}}},"cpu":{"system":{"ticks":70,"time":{"ms":70}},"total":{"ticks":640,"time":{"ms":640},"value":640},"user":{"ticks":570,"time":{"ms":570}}},"handles":{"limit":{"hard":65535,"soft":65535},"open":10},"info":{"ephemeral_id":"6aa6ed52-c270-4b8e-8f35-48feb7d77ebd","name":"filebeat","uptime":{"ms":30155},"version":"8.5.0"},"memstats":{"gc_next":21512168,"memory_alloc":15031152,"memory_sys":28918792,"memory_total":69080704,"rss":114196480},"runtime":{"goroutines":23}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0},"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":2},"load":{"1":0.09,"15":0.67,"5":0.38,"norm":{"1":0.045,"15":0.335,"5":0.19}}}},"ecs.version":"1.6.0"}}
This is my nomad job:
job "template" {
datacenters = ["us-east-1"]
type = "batch"
periodic {
cron = "* * * * *"
}
group "group" {
count = 1
task "command" {
driver = "exec"
config {
command = "bash"
args = ["-c", "cat local/template.out"]
}
template {
data = <<EOH
node.unique.id: {{ env "node.unique.id" }}
node.datacenter: {{ env "node.datacenter" }}
node.unique.name: {{ env "node.unique.name" }}
node.class: {{ env "node.class" }}
attr.cpu.arch: {{ env "attr.cpu.arch" }}
attr.cpu.numcores: {{ env "attr.cpu.numcores" }}
attr.cpu.totalcompute: {{ env "attr.cpu.totalcompute" }}
attr.consul.datacenter: {{ env "attr.consul.datacenter" }}
attr.unique.hostname: {{ env "attr.unique.hostname" }}
attr.unique.network.ip-address: {{ env "attr.unique.network.ip-address" }}
attr.kernel.name: {{ env "attr.kernel.name" }}
attr.kernel.version: {{ env "attr.kernel.version" }}
attr.platform.aws.ami-id: {{ env "attr.platform.aws.ami-id" }}
attr.platform.aws.instance-type: {{ env "attr.platform.aws.instance-type" }}
attr.os.name: {{ env "attr.os.name" }}
attr.os.version: {{ env "attr.os.version" }}
NOMAD_ALLOC_DIR: {{env "NOMAD_ALLOC_DIR"}}
NOMAD_TASK_DIR: {{env "NOMAD_TASK_DIR"}}
NOMAD_SECRETS_DIR: {{env "NOMAD_SECRETS_DIR"}}
NOMAD_MEMORY_LIMIT: {{env "NOMAD_MEMORY_LIMIT"}}
NOMAD_CPU_LIMIT: {{env "NOMAD_CPU_LIMIT"}}
NOMAD_ALLOC_ID: {{env "NOMAD_ALLOC_ID"}}
NOMAD_ALLOC_NAME: {{env "NOMAD_ALLOC_NAME"}}
NOMAD_ALLOC_INDEX: {{env "NOMAD_ALLOC_INDEX"}}
NOMAD_TASK_NAME: {{env "NOMAD_TASK_NAME"}}
NOMAD_GROUP_NAME: {{env "NOMAD_GROUP_NAME"}}
NOMAD_JOB_NAME: {{env "NOMAD_JOB_NAME"}}
NOMAD_DC: {{env "NOMAD_DC"}}
NOMAD_REGION: {{env "NOMAD_REGION"}}
VAULT_TOKEN: {{env "VAULT_TOKEN"}}
GOMAXPROCS: {{env "GOMAXPROCS"}}
HOME: {{env "HOME"}}
LANG: {{env "LANG"}}
LOGNAME: {{env "LOGNAME"}}
NOMAD_ADDR_export: {{env "NOMAD_ADDR_export"}}
NOMAD_ADDR_exstat: {{env "NOMAD_ADDR_exstat"}}
NOMAD_ALLOC_DIR: {{env "NOMAD_ALLOC_DIR"}}
NOMAD_ALLOC_ID: {{env "NOMAD_ALLOC_ID"}}
NOMAD_ALLOC_INDEX: {{env "NOMAD_ALLOC_INDEX"}}
NOMAD_ALLOC_NAME: {{env "NOMAD_ALLOC_NAME"}}
NOMAD_CPU_LIMIT: {{env "NOMAD_CPU_LIMIT"}}
NOMAD_DC: {{env "NOMAD_DC"}}
NOMAD_GROUP_NAME: {{env "NOMAD_GROUP_NAME"}}
NOMAD_HOST_PORT_export: {{env "NOMAD_HOST_PORT_export"}}
NOMAD_HOST_PORT_exstat: {{env "NOMAD_HOST_PORT_exstat"}}
NOMAD_IP_export: {{env "NOMAD_IP_export"}}
NOMAD_IP_exstat: {{env "NOMAD_IP_exstat"}}
NOMAD_JOB_NAME: {{env "NOMAD_JOB_NAME"}}
NOMAD_MEMORY_LIMIT: {{env "NOMAD_MEMORY_LIMIT"}}
NOMAD_PORT_export: {{env "NOMAD_PORT_export"}}
NOMAD_PORT_exstat: {{env "NOMAD_PORT_exstat"}}
NOMAD_REGION: {{env "NOMAD_REGION"}}
NOMAD_SECRETS_DIR: {{env "NOMAD_SECRETS_DIR"}}
NOMAD_TASK_DIR: {{env "NOMAD_TASK_DIR"}}
NOMAD_TASK_NAME: {{env "NOMAD_TASK_NAME"}}
PATH: {{env "PATH"}}
PWD: {{env "PWD"}}
SHELL: {{env "SHELL"}}
SHLVL: {{env "SHLVL"}}
USER: {{env "USER"}}
VAULT_TOKEN: {{env "VAULT_TOKEN"}}
concat key: service/fabio/{{ env "NOMAD_JOB_NAME" }}/listeners
key: {{ keyOrDefault ( printf "service/fabio/%s/listeners" ( env "NOMAD_JOB_NAME" ) ) ":9999" }} {{ define "custom" }}service/fabio/{{env "NOMAD_JOB_NAME" }}/listeners{{ end }}
key: {{ keyOrDefault (executeTemplate "custom") ":9999" }} math - alloc_id + 1: {{env "NOMAD_ALLOC_INDEX" | parseInt | add 1}}
EOH
        destination = "local/template.out"
      }
    }
  }
}
Can you help me understand why autodiscover doesn't work with nomad?

Bitnami ASP.NET Core Helm chart does not handle ingress extraHosts correctly

The Bitnami ASP.NET Core Helm chart in version 3.1.18 is not correct: if extraHosts is configured, the generated ingress configuration is wrong.
Input:
[...]
ingress:
  enabled: true
  selfSigned: false
  ingressClassName: nginx
  hostname: apigateway.hcv-cluster-local.net
  servicePort: http
  tls: true
  selfsigned: false
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    #certmanager.k8s.io/issuer: "letsencrypt-prod"
    #certmanager.k8s.io/acme-challenge-type: http01
  pathType: ImplementationSpecific
  extraHosts:
    - name: apigateway.hcv-cluster-local.net
      path: /api
  extraTls:
    - hosts:
        - apigateway.hcv-cluster-local.net
      secretName: apigateway.local-tls
  secrets:
    - name: apigateway.local-tls
      certificate: |-
        ---- BEGIN CERT...
      key: |-
        ---- BEGIN RSA...
The "helm template" than fails with 'nil pointer evaluating interface {}' on extraPaths. Furthermore the ingress uses old names for the service link. Therefore the ingress.yaml in asp.net need to be adjusted. Seems to be a bug in the bitnami asp.net core chart:
Corrected version:
{{- if .Values.ingress.enabled -}}
apiVersion: {{ include "common.capabilities.ingress.apiVersion" . }}
kind: Ingress
metadata:
  name: {{ include "aspnet-core.fullname" . }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    {{- if .Values.commonLabels }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
    {{- end }}
  annotations:
    {{- if .Values.ingress.certManager }}
    kubernetes.io/tls-acme: "true"
    {{- end }}
    {{- if .Values.ingress.annotations }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.ingress.annotations "context" $) | nindent 4 }}
    {{- end }}
    {{- if .Values.commonAnnotations }}
    {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
    {{- end }}
spec:
  {{- if and .Values.ingress.ingressClassName (include "common.ingress.supportsIngressClassname" .) }}
  ingressClassName: {{ .Values.ingress.ingressClassName | quote }}
  {{- end }}
  rules:
    {{- if .Values.ingress.hostname }}
    - host: {{ .Values.ingress.hostname }}
      http:
        paths:
          ### HCV Start
          {{- if .Values.ingress.extraPaths }}
          {{- toYaml .Values.ingress.extraPaths | nindent 10 }}
          {{- end }}
          ### HCV End
          - path: {{ .Values.ingress.path }}
            {{- if eq "true" (include "common.ingress.supportsPathType" .) }}
            pathType: {{ .Values.ingress.pathType }}
            {{- end }}
            backend: {{- include "common.ingress.backend" (dict "serviceName" (include "aspnet-core.fullname" .) "servicePort" "http" "context" $) | nindent 14 }}
    {{- end }}
    {{- range .Values.ingress.extraHosts }}
    - host: {{ .name }}
      http:
        paths:
          ### HCV Start remove extrapaths
          ### HCV End
          ### HCV Start
          - path: {{ default "/" .path }}
            {{- if eq "true" (include "common.ingress.supportsPathType" $) }}
            pathType: {{ default "ImplementationSpecific" .pathType }}
            {{- end }}
            backend: {{- include "common.ingress.backend" (dict "serviceName" (include "aspnet-core.fullname" $) "servicePort" "http" "context" $) | nindent 14 }}
          ### HCV End
    {{- end }}
  {{- if or .Values.ingress.tls .Values.ingress.extraTls .Values.ingress.hosts }}
  tls:
    {{- if .Values.ingress.tls }}
    - hosts:
        - {{ .Values.ingress.hostname }}
      secretName: {{ printf "%s-tls" .Values.ingress.hostname }}
    {{- end }}
    {{- if .Values.ingress.extraTls }}
    {{- toYaml .Values.ingress.extraTls | nindent 4 }}
    {{- end }}
  {{- end }}
{{- end }}
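With this correction, an extraHosts entry from the values above renders roughly like the following on a current Kubernetes version (a sketch; the backend service name is rendered from aspnet-core.fullname and is only a placeholder here):
- host: apigateway.hcv-cluster-local.net
  http:
    paths:
      - path: /api
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-release-aspnet-core  # placeholder: depends on the Helm release name
            port:
              name: http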
I loaded an extracted chart into my service, but I think the upstream chart needs to be corrected. Maybe someone could verify.

How to extract values from Ansible gathered facts and manipulate them?

I want to extract some values from the gathered fact {{ ansible_default_ipv4 }}.
In order to do so, I ran ansible -i hosts all -m setup -a filter=ansible_default_ipv4
I then get this output:
"ansible_facts": {
"ansible_default_ipv4": {
"address": "10.6.97.221",
"alias": "bond0",
"broadcast": "10.6.97.255",
"gateway": "10.6.97.1",
"interface": "bond0",
"macaddress": "e8:39:35:c0:38:a4",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "10.6.97.0",
"type": "ether"
},
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false
I want to extract the values of address, netmask, and gateway and print them to a file.
How can I achieve that?
I managed to achieve it with:
- name: echo
  shell: echo "{{ ansible_hostname }} {{ ansible_default_ipv4.macaddress }} {{ ansible_default_ipv4.address }} {{ ansible_default_ipv4.netmask }} {{ ansible_default_ipv4.gateway }} {{ SERVER_ILO.stdout }}" >> /tmp/log.txt
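The same result can also be written without a shell by putting the facts straight into a file with the copy module (a sketch; the destination path is an assumption, matching the shell version above):
- name: Write selected ansible_default_ipv4 facts to a file
  copy:
    dest: /tmp/log.txt  # assumed destination
    content: |
      {{ ansible_default_ipv4.address }} {{ ansible_default_ipv4.netmask }} {{ ansible_default_ipv4.gateway }}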

Using variables in a Jinja2 template in an Ansible playbook

Any idea how we can use dynamic variables inside a Jinja2 template? Below is the data from my Jinja2 template:
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/DATA,/dev/oracleasm/disks/ARCH,/dev/oracleasm/disks/OCR
The variable values in defaults/main.yml are:
asm_disk_detail:
  - { name: 'OCR', path: '/dev/sde1' }
  - { name: 'DATA', path: '/dev/sdf1' }
  - { name: 'ARCH', path: '/dev/sdg1' }
I am trying to pass these variable values dynamically at the time of running the playbook, so that they automatically get populated in the template.
Yes, this is possible. defaults/main.yml is sourced automatically when the Ansible role is invoked; you just have to write a Jinja2 template file for it.
A better representation of the main.yml file would be:
---
asm_disk_detail:
  - name: OCR
    path: "/dev/sde1"
  - name: DATA
    path: "/dev/sdf1"
  - name: ARCH
    path: "/dev/sdg1"
Jinja2 templates support for loops, so you can iterate over the asm_disk_detail variable above and create the config file as needed. Please try the Jinja2 file creation on your side; if there are any issues, please shout :)
Play and Jinja2 template:
Playbook:
---
- name: test
  hosts: localhost
  tasks:
    - name: test
      include_vars: vars.yml
    - name: jinja2
      template:
        src: template/template.yml
        dest: target/target.yml
Jinja2 template:
{%- for item in asm_disk_detail -%}
{%- if not loop.last -%}
{{ item.path }}/{{ item.name }},
{%- else -%}
{{ item.path }}/{{ item.name }}
{%- endif -%}
{%- endfor -%}
Output (assuming the static oracle.install.asm.diskGroup.disks= prefix sits in front of the loop in the actual template):
oracle.install.asm.diskGroup.disks=/dev/sde1/OCR,/dev/sdf1/DATA,/dev/sdg1/ARCH
Use the Ansible template module with a for loop in your template:
{% for disk in asm_disk_detail %}
disk name: {{ disk.name }}
disk path: {{ disk.path }}
{% endfor %}
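A minimal task to render that template could look like this (the file names are assumptions):
- name: Render disk details from the template
  template:
    src: disks.j2        # hypothetical template file containing the loop above
    dest: /tmp/disks.txt # hypothetical destination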

How to exchange a full route? Router-link exchanges only the last part of the route

I am new to Vue and Vue Router and am using Vue Router in history mode. The app contains a menu which is dynamically loaded:
<router-link
  tag="li"
  class="link"
  v-for="item in menu"
  v-bind:key="item.id"
  :to="item.slug"
  :exact="item.slug === '/' ? true : false">{{ item.content }}
</router-link>
It works well as long as I stay in the parent routes, like http://localhost:8080/posts. As soon as I navigate a level deeper, e.g. to the post with id 8 (http://localhost:8080/posts/8) via a router-link inside the template
<router-link
  tag="h2"
  class="link"
  :to="{ name: 'post', params: { id: post.id }}">
  {{ post.title.rendered }}
</router-link>
navigating there works, but it doesn't go back to the parent route when I click the main navigation links. Instead it just appends the menu link to the end of the current route: I end up at http://localhost:8080/posts/posts instead of http://localhost:8080/posts.
The router
const router = new Router({
  mode: 'history',
  base: '',
  routes: [
    { path: '/', name: 'home', component: HomePage },
    { path: '/posts', name: 'posts', component: PostsPage },
    { path: '/posts/:id', name: 'post', component: SinglePost },
    { path: '/projects', name: 'projects', component: ProjectsPage },
    { path: '/projects/:id', name: 'project', component: ProjectPage },
    { path: '/:page', name: 'page', component: SinglePage },
    { path: "*", redirect: '/' },
  ],
  // etc..
});
I guess I am making a common mistake, but I can't find a solution. Any help appreciated.
The menu links use relative paths: :to="item.slug" receives something like posts without a leading slash, so the router resolves it relative to the current route, and from /posts/8 you end up at /posts/posts. Use absolute paths in the menu. Instead of
<router-link
  tag="li"
  class="link"
  v-for="item in menu"
  v-bind:key="item.id"
  :to="item.slug"
  :exact="item.slug === '/' ? true : false">{{ item.content }}
</router-link>
use
<router-link
  tag="li"
  class="link"
  v-for="item in menu"
  v-bind:key="item.id"
  :to="item.slug.startsWith('/') ? item.slug : '/' + item.slug"
  :exact="item.slug === '/' ? true : false">{{ item.content }}
</router-link>
The change is in router-link's :to attribute: the slug is prefixed with / so the path is absolute (alternatively, store the slugs in menu with a leading slash). The named-route link :to="{ name: 'post', params: { id: post.id }}" is fine as it is.
You could, and probably should, use nested routes and components for posts (posts/:id), but that is a little more work and you may not need it at the moment. See the Vue Router documentation on nested routes for more.