How to set the hash key value in Envoy proxy for RING_HASH load balancing

I am trying to set up RING_HASH load balancing on the Envoy proxy based on a request header. From the documentation it looks like I have to set the hash key in filter_metadata:
filter_metadata:
  envoy.lb:
    hash_key: "YOUR HASH KEY"
But I am not able to find out what the possible values/expressions for hash_key are. I tried configuring a fixed value for the hash key, but requests still do not go to the same upstream server.
My envoy.yaml
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: some_service }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: RING_HASH
    metadata:
      filter_metadata:
        envoy.lb:
          hash_key: "fixed_value"
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.2
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.3
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.4
                port_value: 8080

I guess the envoy.lb metadata is used to set the hash of the upstream host, not to configure the hash key for a request.
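For completeness, that per-host hash_key lives in the endpoint (LbEndpoint) metadata, where it overrides the address that the ring is built from. A minimal sketch of where it would go (the "host-a" value is purely illustrative, not from the config above):
clusters:
- name: some_service
  lb_policy: RING_HASH
  load_assignment:
    cluster_name: some_service
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: 172.19.0.2, port_value: 8080 }
        metadata:                 # per-endpoint metadata, not cluster metadata
          filter_metadata:
            envoy.lb:
              hash_key: "host-a"  # illustrative; overrides the host's address on the ring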
There is another configuration, hash_policy, which is what should be used here.
My final working Envoy configuration looks like this. The sticky session is based on the header sticky-key:
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: some_service
                  hash_policy:
                  - header:
                      header_name: sticky-key
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: RING_HASH
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.2
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.3
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.4
                port_value: 8080
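Incidentally, the header source shown above is only one of the hash_policy options on the route; a cookie-based policy is another common way to get sticky sessions. A rough sketch (the cookie name and TTL are illustrative, not part of the config above):
routes:
- match: { prefix: "/" }
  route:
    cluster: some_service
    hash_policy:
    - cookie:
        name: sticky-cookie   # illustrative name; Envoy generates the cookie when a ttl is set
        ttl: 3600s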

Related

Swagger Parser error duplicated mapping key

Hi friends, I have an error during YAML API testing in the Swagger editor.
I have a duplicate mapping key error at line 98.
When I try to force the test to execute anyway, I get:
Failed to fetch.
Possible Reasons:
CORS
Network Failure
URL scheme must be "http" or "https" for CORS request.
Please help me with this error!
openapi: 3.0.0
info:
title: 06-jobs-api
contact: {}
version: '1.0'
servers:
- url: https://new-jobs-api.herokuapp.com/api/v1
variables: {}
paths:
/auth/register:
post:
tags:
- Auth
summary: register
operationId: register
parameters: []
requestBody:
description: ''
content:
application/json:
schema:
$ref: '#/components/schemas/registerrequest'
example:
name: josh
email: josh@gmail.com
password: joshgmail.com
required: true
responses:
'200':
description: ''
headers: {}
deprecated: false
security: []
/auth/login:
post:
tags:
- Auth
summary: login
operationId: login
parameters: []
requestBody:
description: ''
content:
application/json:
schema:
$ref: '#/components/schemas/loginrequest'
example:
email: john@gmail.com
password: john@gmail.com
required: true
responses:
'200':
description: ''
headers: {}
deprecated: false
security: []
/jobs:
post:
tags:
- Jobs
summary: create job
operationId: createjob
parameters: []
requestBody:
description: ''
content:
application/json:
schema:
$ref: '#/components/schemas/createjobrequest'
example:
company: mongodb
position: back-end developer
required: true
responses:
'200':
description: ''
headers: {}
deprecated: false
get:
tags:
- Jobs
summary: get all jobs
operationId: getalljobs
parameters: []
responses:
'200':
description: ''
headers: {}
deprecated: false
/jobs/{id}:
parameters:
- in: path
name: id
schema:
type: string
required: true
description: The user ID
get: ----- here I have an error line no 98
tags:
- Jobs
summary: get single job
operationId: getsinglejob
parameters: []
responses:
'200':
description: ''
headers: {}
deprecated: false
patch:
tags:
- Jobs
summary: update job
operationId: updatejob
parameters: []
requestBody:
description: ''
content:
application/json:
schema:
$ref: '#/components/schemas/updatejobrequest'
example:
company: prime
position: front-end developer
required: true
responses:
'200':
description: ''
headers: {}
deprecated: false
delete:
tags:
- Jobs
summary: delete job
operationId: deletejob
parameters: []
responses:
'200':
description: ''
headers: {}
deprecated: false
components:
schemas:
registerrequest:
title: registerrequest
required:
- name
- email
- password
type: object
properties:
name:
type: string
email:
type: string
password:
type: string
example:
name: josh
email: josh@gmail.com
password: joshgmail.com
createjobrequest:
title: createjobrequest
required:
- company
- position
type: object
properties:
company:
type: string
position:
type: string
example:
company: mongodb
position: back-end developer
updatejobrequest:
title: updatejobrequest
required:
- company
- position
type: object
properties:
company:
type: string
position:
type: string
example:
company: prime
position: front-end developer
loginrequest:
title: loginrequest
required:
- email
- password
type: object
properties:
email:
type: string
password:
type: string
example:
email: john@gmail.com
password: john@gmail.com
securitySchemes:
httpBearer:
type: http
scheme: bearer
security:
- httpBearer: []
tags:
- name: Misc
description: ''
- name: Auth
description: ''
- name: Jobs
description: ''
You need to unindent /jobs/{id} in line 90 by two spaces.
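In other words, /jobs/{id} must sit at the same indentation level as /jobs under paths:, not nested inside it. An abbreviated sketch of the intended layout:
paths:
  /jobs:
    post:
      ...
    get:
      ...
  /jobs/{id}:
    parameters:
      - in: path
        name: id
        ...
    get:
      ...
    patch:
      ...
    delete:
      ...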

Adding ServiceMonitor to existing Express microservice not being registered in Prometheus

I'm trying to set up monitoring for my microservices, and I've created a simple microservice just to try out Prometheus monitoring. But I'm running into an issue where Prometheus does not register the service.
I'm deploying an example service like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinger-svc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pinger-svc
  template:
    metadata:
      labels:
        app: pinger-svc
    spec:
      restartPolicy: Always
      containers:
      - name: pinger-svc
        image: pingerservice:latest
        ports:
        - containerPort: 3000
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: pinger-svc
  labels:
    app: pinger-svc
spec:
  selector:
    app: pinger-svc
  type: NodePort
  ports:
  - name: web
    port: 80
    targetPort: 3000
    nodePort: 32333
And I'm trying to set up a ServiceMonitor with:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-kube-prometheus-pinger-service
spec:
  selector:
    matchLabels:
      app: pinger-svc
  endpoints:
  - port: web
    path: /metrics
    interval: "10s"
In the app I'm also exposing metrics at the /metrics endpoint:
const express = require('express');
const promBundle = require('express-prom-bundle');
const app = express();
const metricsMiddleware = promBundle({includeMethod: true});
app.use(metricsMiddleware);
app.listen(3000); // same port as the containerPort above
This is all I can see under the targets at the Prometheus endpoint:
Here's my prometheus-operator pod config:
apiVersion: v1
kind: Pod
metadata:
name: prometheus-kube-prometheus-operator-cb55c97b9-4rh4f
generateName: prometheus-kube-prometheus-operator-cb55c97b9-
namespace: default
uid: c3c710b9-868f-41d9-b35e-7d55a212dd6f
resourceVersion: '176924'
creationTimestamp: '2021-09-16T13:34:16Z'
labels:
app: kube-prometheus-stack-operator
app.kubernetes.io/instance: prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: kube-prometheus-stack
app.kubernetes.io/version: 18.0.8
chart: kube-prometheus-stack-18.0.8
heritage: Helm
pod-template-hash: cb55c97b9
release: prometheus
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: prometheus-kube-prometheus-operator-cb55c97b9
uid: 62dbdb1e-ed3f-4f6f-a4e6-6da48d084681
controller: true
blockOwnerDeletion: true
managedFields:
- manager: k3s
operation: Update
apiVersion: v1
time: '2021-09-16T13:34:47Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:generateName': {}
'f:labels':
.: {}
'f:app': {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/part-of': {}
'f:app.kubernetes.io/version': {}
'f:chart': {}
'f:heritage': {}
'f:pod-template-hash': {}
'f:release': {}
'f:ownerReferences':
.: {}
'k:{"uid":"62dbdb1e-ed3f-4f6f-a4e6-6da48d084681"}':
.: {}
'f:apiVersion': {}
'f:blockOwnerDeletion': {}
'f:controller': {}
'f:kind': {}
'f:name': {}
'f:uid': {}
'f:spec':
'f:containers':
'k:{"name":"kube-prometheus-stack"}':
.: {}
'f:args': {}
'f:image': {}
'f:imagePullPolicy': {}
'f:name': {}
'f:ports':
.: {}
'k:{"containerPort":10250,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
'f:name': {}
'f:protocol': {}
'f:resources': {}
'f:securityContext':
.: {}
'f:allowPrivilegeEscalation': {}
'f:readOnlyRootFilesystem': {}
'f:terminationMessagePath': {}
'f:terminationMessagePolicy': {}
'f:volumeMounts':
.: {}
'k:{"mountPath":"/cert"}':
.: {}
'f:mountPath': {}
'f:name': {}
'f:readOnly': {}
'f:dnsPolicy': {}
'f:enableServiceLinks': {}
'f:restartPolicy': {}
'f:schedulerName': {}
'f:securityContext':
.: {}
'f:fsGroup': {}
'f:runAsGroup': {}
'f:runAsNonRoot': {}
'f:runAsUser': {}
'f:serviceAccount': {}
'f:serviceAccountName': {}
'f:terminationGracePeriodSeconds': {}
'f:volumes':
.: {}
'k:{"name":"tls-secret"}':
.: {}
'f:name': {}
'f:secret':
.: {}
'f:defaultMode': {}
'f:secretName': {}
'f:status':
'f:conditions':
'k:{"type":"ContainersReady"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:status': {}
'f:type': {}
'k:{"type":"Initialized"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:status': {}
'f:type': {}
'k:{"type":"Ready"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:status': {}
'f:type': {}
'f:containerStatuses': {}
'f:hostIP': {}
'f:phase': {}
'f:podIP': {}
'f:podIPs':
.: {}
'k:{"ip":"10.42.0.53"}':
.: {}
'f:ip': {}
'f:startTime': {}
selfLink: >-
/api/v1/namespaces/default/pods/prometheus-kube-prometheus-operator-cb55c97b9-4rh4f
status:
phase: Running
conditions:
- type: Initialized
status: 'True'
lastProbeTime: null
lastTransitionTime: '2021-09-16T13:34:16Z'
- type: Ready
status: 'True'
lastProbeTime: null
lastTransitionTime: '2021-09-16T13:34:46Z'
- type: ContainersReady
status: 'True'
lastProbeTime: null
lastTransitionTime: '2021-09-16T13:34:46Z'
- type: PodScheduled
status: 'True'
lastProbeTime: null
lastTransitionTime: '2021-09-16T13:34:16Z'
hostIP: 192.168.50.85
podIP: 10.42.0.53
podIPs:
- ip: 10.42.0.53
startTime: '2021-09-16T13:34:16Z'
containerStatuses:
- name: kube-prometheus-stack
state:
running:
startedAt: '2021-09-16T13:34:46Z'
lastState: {}
ready: true
restartCount: 0
image: 'quay.io/prometheus-operator/prometheus-operator:v0.50.0'
imageID: >-
quay.io/prometheus-operator/prometheus-operator@sha256:ab4f480f2cc65e98f1b4dfb93eb3a41410036359c238fdd60bb3f59deca8d522
containerID: >-
containerd://4f03044b1013f18f918034e96950908ee3fc31f8de901a740a8594b273b957bc
started: true
qosClass: BestEffort
spec:
volumes:
- name: tls-secret
secret:
secretName: prometheus-kube-prometheus-admission
defaultMode: 420
- name: kube-api-access-6rbnp
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- path: namespace
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
defaultMode: 420
containers:
- name: kube-prometheus-stack
image: 'quay.io/prometheus-operator/prometheus-operator:v0.50.0'
args:
- '--kubelet-service=kube-system/prometheus-kube-prometheus-kubelet'
- '--localhost=127.0.0.1'
- >-
--prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.50.0
- '--config-reloader-cpu-request=100m'
- '--config-reloader-cpu-limit=100m'
- '--config-reloader-memory-request=50Mi'
- '--config-reloader-memory-limit=50Mi'
- '--thanos-default-base-image=quay.io/thanos/thanos:v0.17.2'
- '--web.enable-tls=true'
- '--web.cert-file=/cert/cert'
- '--web.key-file=/cert/key'
- '--web.listen-address=:10250'
- '--web.tls-min-version=VersionTLS13'
ports:
- name: https
containerPort: 10250
protocol: TCP
resources: {}
volumeMounts:
- name: tls-secret
readOnly: true
mountPath: /cert
- name: kube-api-access-6rbnp
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: prometheus-kube-prometheus-operator
serviceAccount: prometheus-kube-prometheus-operator
nodeName: ubuntu
securityContext:
runAsUser: 65534
runAsGroup: 65534
runAsNonRoot: true
fsGroup: 65534
schedulerName: default-scheduler
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
priority: 0
enableServiceLinks: true
preemptionPolicy: PreemptLowerPriority
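One thing worth checking with kube-prometheus-stack (the chart shown in the operator pod labels above): by default the Prometheus instance only picks up ServiceMonitors whose labels match its serviceMonitorSelector, which the Helm chart wires to the release label. A hedged sketch of the ServiceMonitor with such a label added; the release: prometheus value is taken from the pod labels above and may differ in other installs:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-kube-prometheus-pinger-service
  labels:
    release: prometheus   # assumption based on the Helm release shown above; must match the serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: pinger-svc
  endpoints:
  - port: web
    path: /metrics
    interval: "10s"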

ISTIO Ingress Gateway logs

We have set up Istio, and we are using the Istio ingress gateway for inbound traffic. We have set up TLS for a TCP port. Sample code can be found here.
We also enabled logs by following this Istio guide.
We tested the TLS connection using openssl and it works fine.
However, when we try to connect from an application, the TLS negotiation fails. I have provided more details with Wireshark here.
We would like to get logs from Istio on the TLS negotiation and find out why it fails.
Istio Gateway YAML
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dremio-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - testdomain.net
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: testdomain-credentials
    hosts:
    - testdomain.net
  - port:
      number: 31020
      name: odbc-dremio-tls
      protocol: tls
    tls:
      mode: SIMPLE
      minProtocolVersion: TLSV1_0
      maxProtocolVersion: TLSV1_3
      credentialName: testdomain-credentials
    hosts:
    - testdomain.net
Virtual Service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dremio
spec:
  hosts:
  - testdomain.net
  gateways:
  - dremio-gateway
  http:
  - match:
    - port: 443
    - port: 80
    route:
    - destination:
        host: dremio-client
        port:
          number: 9047
  tcp:
  - match:
    - port: 31020
    route:
    - destination:
        host: dremio-client
        port:
          number: 31010
Partial Config Dump
{
"name": "0.0.0.0_31020",
"active_state": {
"version_info": "2020-07-21T12:11:49Z/9",
"listener": {
"#type": "type.googleapis.com/envoy.api.v2.Listener",
"name": "0.0.0.0_31020",
"address": {
"socket_address": {
"address": "0.0.0.0",
"port_value": 31020
}
},
"filter_chains": [
{
"filter_chain_match": {
"server_names": [
"testdomain.net"
]
},
"filters": [
{
"name": "istio.stats",
"typed_config": {
"#type": "type.googleapis.com/udpa.type.v1.TypedStruct",
"type_url": "type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm",
"value": {
"config": {
"root_id": "stats_outbound",
"vm_config": {
"vm_id": "tcp_stats_outbound",
"runtime": "envoy.wasm.runtime.null",
"code": {
"local": {
"inline_string": "envoy.wasm.stats"
}
}
},
"configuration": "{\n \"debug\": \"false\",\n \"stat_prefix\": \"istio\"\n}\n"
}
}
}
},
{
"name": "envoy.tcp_proxy",
"typed_config": {
"#type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
"stat_prefix": "outbound|31010||dremio-client.dremio.svc.cluster.local",
"cluster": "outbound|31010||dremio-client.dremio.svc.cluster.local",
"access_log": [
{
"name": "envoy.file_access_log",
"typed_config": {
"#type": "type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog",
"path": "/dev/stdout",
"format": "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% \"%DYNAMIC_METADATA(istio.mixer:status)%\" \"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n"
}
}
]
}
}
],
"transport_socket": {
"name": "envoy.transport_sockets.tls",
"typed_config": {
"#type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
"common_tls_context": {
"tls_params": {
"tls_minimum_protocol_version": "TLSv1_0",
"tls_maximum_protocol_version": "TLSv1_3"
},
"alpn_protocols": [
"h2",
"http/1.1"
],
"tls_certificate_sds_secret_configs": [
{
"name": "testdomain-credentials",
"sds_config": {
"api_config_source": {
"api_type": "GRPC",
"grpc_services": [
{
"google_grpc": {
"target_uri": "unix:/var/run/ingress_gateway/sds",
"stat_prefix": "sdsstat"
}
}
]
}
}
}
]
},
"require_client_certificate": false
}
}
}
],
"listener_filters": [
{
"name": "envoy.listener.tls_inspector",
"typed_config": {
"#type": "type.googleapis.com/envoy.config.filter.listener.tls_inspector.v2.TlsInspector"
}
}
],
"traffic_direction": "OUTBOUND"
},
"last_updated": "2020-07-21T12:11:50.303Z"
}
}
By enabling trace logging on the Envoy conn_handler, we can see the following message:
closing connection: no matching filter chain found
After getting the "no matching filter chain" message, I looked at the filter chain for port 31020, which matches on the domain I provided in my Gateway config. It turns out that when my application (ODBC) connects, it does not send that host name (SNI), so the filter chain never matches.
The solution is simply to replace the host domain with '*':
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dremio-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - testdomain.net
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: testdomain-credentials
    hosts:
    - testdomain.net
  - port:
      number: 31020
      name: odbc-dremio-tls
      protocol: tls
    tls:
      mode: SIMPLE
      minProtocolVersion: TLSV1_0
      maxProtocolVersion: TLSV1_3
      credentialName: testdomain-credentials
    hosts:
    - '*'

Istio virtual service header rules are not applied

So I have a rather unique situation.
Problem
VirtualService route rules are not applied. We have a Buzzfeed SSO setup in our cluster. We want to modify response headers (i.e. add a header) on each request that matches the URI /sign_in.
Buzzfeed SSO has its own namespace.
To accomplish this I have created a VirtualService.
Steps to Reproduce:
We used this virtual service spec to create the route rules.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sso-auth-injector
spec:
  hosts:
  - sso-auth
  http:
  - match:
    - uri:
        prefix: /sign_in
      ignoreUriCase: true
    route:
    - destination:
        host: sso-auth
    headers:
      response:
        add:
          foo: bar
      request:
        add:
          hello: world
Analysis
1) istioctl x describe has this output:
Pod: sso-auth-58744b56cd-lwqrh.sso
Pod Ports: 4180 (sso-auth), 15090 (istio-proxy)
Suggestion: add ‘app’ label to pod for Istio telemetry.
Suggestion: add ‘version’ label to pod for Istio telemetry.
Service: sso-auth.sso
Port: http 80/HTTP targets pod port 4180
Pod is PERMISSIVE (enforces HTTP/mTLS) and clients speak HTTP
VirtualService: sso-auth-injector.sso
/sign_in uncased
2) istioctl output. Not attaching all the rules, but for outbound|80|:
"routes": [
{
"match": {
"prefix": "/sign_in",
"caseSensitive": false
},
"route": {
"cluster": "outbound|80||sso-auth.sso.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxGrpcTimeout": "0s"
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking/v1alpha3/namespaces/sso/virtual-service/sso-auth-injector"
}
}
},
"decorator": {
"operation": "sso-auth.sso.svc.cluster.local:80/sign_in*"
},
"typedPerFilterConfig": {
"mixer": {
"#type": "type.googleapis.com/istio.mixer.v1.config.client.ServiceConfig",
"disableCheckCalls": true,
"mixerAttributes": {
"attributes": {
"destination.service.host": {
"stringValue": "sso-auth.sso.svc.cluster.local"
},
"destination.service.name": {
"stringValue": "sso-auth"
},
"destination.service.namespace": {
"stringValue": "sso"
},
"destination.service.uid": {
"stringValue": "istio://sso/services/sso-auth"
}
}
},
"forwardAttributes": {
"attributes": {
"destination.service.host": {
"stringValue": "sso-auth.sso.svc.cluster.local"
},
"destination.service.name": {
"stringValue": "sso-auth"
},
"destination.service.namespace": {
"stringValue": "sso"
},
"destination.service.uid": {
"stringValue": "istio://sso/services/sso-auth"
}
}
}
}
},
"requestHeadersToAdd": [
{
"header": {
"key": "hello",
"value": "world"
},
"append": true
}
],
"responseHeadersToAdd": [
{
"header": {
"key": "foo",
"value": "bar"
},
"append": true
}
]
}
]
},
Issues/Questions
These rules don't take effect. Each request is passed to the service, but the headers are not modified.
Shouldn't the route rules apply to inbound requests, as opposed to outbound (as shown in the generated config)?
We want to modify response headers (i.e. add a header) on each request that matches the URI /sign_in
I made an example, tested it, and everything works just fine.
Check the virtual service, the tests, and the whole example below.
Virtual service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh
  hosts:
  - nginx.default.svc.cluster.local
  http:
  - name: match
    headers:
      response:
        add:
          foo: "bar"
    match:
    - uri:
        prefix: /sign_in
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v1
Everything you need for the test:
apiVersion: v1
kind: Pod
metadata:
  name: ubu1
spec:
  containers:
  - name: ubu1
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh
  hosts:
  - nginx.default.svc.cluster.local
  http:
  - name: match
    headers:
      response:
        add:
          foo: "bar"
    match:
    - uri:
        prefix: /sign_in
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginxdest
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      run: nginx1
Test from the ubuntu pod
I used curl -I to display the response headers:
curl -I nginx/sign_in
HTTP/1.1 200 OK
server: envoy
date: Tue, 24 Mar 2020 07:44:10 GMT
content-type: text/html
content-length: 13
last-modified: Thu, 12 Mar 2020 06:52:43 GMT
etag: "5e69dc3b-d"
accept-ranges: bytes
x-envoy-upstream-service-time: 3
foo: bar
As you can see the foo:bar header is added correctly.
Additional links for headers
https://istiobyexample.dev/response-headers/
Istio adds and removed headers, but doesn't overwrite
How to display request headers with command line curl
In your istioctl analyze output I see you might have a 503 error:
"retriableStatusCodes": [
503
]
Additional links for the 503 error
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
Accessing service using istio ingress gives 503 error when mTLS is enabled

symfony LexikJWTAuthenticationBundle bad credential

I want to integrate Symfony's LexikJWTAuthenticationBundle with FOSUserBundle. I've followed the instructions here, but I always receive a 401 bad credentials error.
Here is my config.yml file:
imports:
- { resource: parameters.yml }
- { resource: security.yml }
- { resource: services.yml }
# Put parameters here that don't need to change on each machine where the app is deployed
# https://symfony.com/doc/current/best_practices/configuration.html#application-related-configuration
parameters:
locale: en
framework:
#esi: ~
#translator: { fallbacks: ['%locale%'] }
translator: ~
secret: '%secret%'
router:
resource: '%kernel.project_dir%/app/config/routing.yml'
strict_requirements: ~
form: ~
csrf_protection: ~
validation: { enable_annotations: true }
#serializer: { enable_annotations: true }
templating:
engines: ['twig']
default_locale: '%locale%'
trusted_hosts: ~
session:
# https://symfony.com/doc/current/reference/configuration/framework.html#handler-id
handler_id: session.handler.native_file
save_path: '%kernel.project_dir%/var/sessions/%kernel.environment%'
fragments: ~
http_method_override: true
assets: ~
php_errors:
log: true
serializer:
enabled: true
# Twig Configuration
twig:
debug: '%kernel.debug%'
strict_variables: '%kernel.debug%'
# Doctrine Configuration
doctrine:
dbal:
driver: pdo_mysql
host: '%database_host%'
port: '%database_port%'
dbname: '%database_name%'
user: '%database_user%'
password: '%database_password%'
charset: UTF8
mapping_types:
enum: string
# if using pdo_sqlite as your database driver:
# 1. add the path in parameters.yml
# e.g. database_path: "%kernel.project_dir%/var/data/data.sqlite"
# 2. Uncomment database_path in parameters.yml.dist
# 3. Uncomment next line:
#path: '%database_path%'
orm:
auto_generate_proxy_classes: '%kernel.debug%'
naming_strategy: doctrine.orm.naming_strategy.underscore
auto_mapping: true
# Swiftmailer Configuration
swiftmailer:
transport: '%mailer_transport%'
host: '%mailer_host%'
username: '%mailer_user%'
password: '%mailer_password%'
spool: { type: memory }
fos_user:
db_driver: orm # other valid values are 'mongodb', 'couchdb' and 'propel'
firewall_name: main
user_class: AppBundle\Entity\Collaborator
from_email:
address: sahnoun.mabrouk@gmail.com
sender_name: sahnoun MABROUK
# Nelmio CORS Configuration
nelmio_cors:
defaults:
allow_credentials: false
allow_origin: ['*']
allow_headers: ['*']
allow_methods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS']
max_age: 3600
hosts: []
origin_regex: false
fos_rest:
serializer:
serialize_null: true
routing_loader:
include_format: false
view:
view_response_listener: true
format_listener:
rules:
- { path: '^/', priorities: ['json'], fallback_format: 'json' }
- { path: '^/login', priorities: ['html'], fallback_format: 'html' }
- { path: '^/register', priorities: ['html'], fallback_format: 'html' }
- { path: '^/resetting', priorities: ['html'], fallback_format: 'html' }
lexik_jwt_authentication:
private_key_path: '%jwt_private_key_path%'
public_key_path: '%jwt_public_key_path%'
pass_phrase: '%jwt_key_pass_phrase%'
token_ttl: '%jwt_token_ttl%'
security.yml :
# To get started with security, check out the documentation:
# https://symfony.com/doc/current/security.html
security:
# https://symfony.com/doc/current/security.html#b-configuring-how-users-are-loaded
providers:
in_memory:
memory: ~
fos_userbundle:
id: fos_user.user_provider.username
encoders:
FOS\UserBundle\Model\UserInterface: bcrypt
role_hierarchy:
ROLE_ADMIN: ROLE_USER
ROLE_SUPER_ADMIN: ROLE_ADMIN
firewalls:
login:
pattern: ^/api/login
stateless: true
anonymous: true
form_login:
check_path: /api/login_check
success_handler: lexik_jwt_authentication.handler.authentication_success
failure_handler: lexik_jwt_authentication.handler.authentication_failure
require_previous_session: false
api:
pattern: ^/api
stateless: true
guard:
authenticators:
- lexik_jwt_authentication.jwt_token_authenticator
# disables authentication for assets and the profiler, adapt it according to your needs
dev:
pattern: ^/(_(profiler|wdt)|css|images|js)/
security: false
main:
anonymous: ~
pattern: ^/
logout: true
form_login:
provider: fos_userbundle
csrf_token_generator: security.csrf.token_manager
logout: true
anonymous: true
# activate different ways to authenticate
# https://symfony.com/doc/current/security.html#a-configuring-how-your-users-will-authenticate
#http_basic: ~
# https://symfony.com/doc/current/security/form_login_setup.html
#form_login: ~
access_control:
- { path: ^/login$, role: IS_AUTHENTICATED_ANONYMOUSLY }
- { path: ^/register, role: IS_AUTHENTICATED_ANONYMOUSLY }
- { path: ^/resetting, role: IS_AUTHENTICATED_ANONYMOUSLY }
- { path: ^/admin/, role: ROLE_ADMIN }
- { path: ^/api/login, roles: IS_AUTHENTICATED_ANONYMOUSLY }
- { path: ^/api, roles: IS_AUTHENTICATED_FULLY }
And I've added
api_login_check:
    path: /api/login_check
to the routing file.
I've read all the issues related to this error but nothing works for me.
Can anyone help me, please?
You are missing the respective provider for each firewall; take a look here.
security:
    firewalls:
        login:
            ...
            provider: in_memory
            ...
        api:
            ...
            provider: jwt
            ...
Problem solved! Just make fos_userbundle the first provider, so that credentials are checked against the database:
security:
    providers:
        fos_userbundle:
            id: fos_user.user_provider.username
        in_memory:
            memory: ~
    ...
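Putting the two answers together, a sketch of the relevant security.yml pieces; the per-firewall provider lines are the assumption here, everything else is taken from the configuration above:
security:
    providers:
        fos_userbundle:
            id: fos_user.user_provider.username
        in_memory:
            memory: ~
    firewalls:
        login:
            pattern: ^/api/login
            stateless: true
            anonymous: true
            provider: fos_userbundle   # explicit provider for this firewall
            form_login:
                check_path: /api/login_check
                success_handler: lexik_jwt_authentication.handler.authentication_success
                failure_handler: lexik_jwt_authentication.handler.authentication_failure
                require_previous_session: false
        api:
            pattern: ^/api
            stateless: true
            provider: fos_userbundle   # explicit provider here as well
            guard:
                authenticators:
                    - lexik_jwt_authentication.jwt_token_authenticator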