Istio Ingress Gateway logs - SSL/TLS

We have set up Istio and are using the Istio ingress gateway for inbound traffic. We have set up TLS for a TCP port. Sample code can be found here.
We also enabled access logs by following this Istio guide.
We tested the TLS connection using openssl and it works fine.
However, when we try to connect from an application, the TLS negotiation fails. I have provided more details, including a Wireshark capture, here.
We would like to get logs from Istio on the TLS negotiation and find out why it fails.
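One way to get those logs (a hedged sketch; the exact flags depend on your Istio version, and istio-ingressgateway-xxxxx is a placeholder pod name) is to raise the Envoy log level for the connection-related loggers on the ingress gateway and then follow its logs:
# Raise Envoy verbosity for connection handling on the gateway pod
istioctl proxy-config log istio-ingressgateway-xxxxx.istio-system --level conn_handler:trace,connection:debug,filter:debug
# Or use the Envoy admin API from inside the gateway pod
kubectl -n istio-system exec deploy/istio-ingressgateway -c istio-proxy -- curl -s -X POST "localhost:15000/logging?connection=debug"
# Then watch the handshake attempts
kubectl -n istio-system logs -f deploy/istio-ingressgateway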
Istio Gateway YAML
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dremio-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - testdomain.net
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: testdomain-credentials
    hosts:
    - testdomain.net
  - port:
      number: 31020
      name: odbc-dremio-tls
      protocol: TLS
    tls:
      mode: SIMPLE
      minProtocolVersion: TLSV1_0
      maxProtocolVersion: TLSV1_3
      credentialName: testdomain-credentials
    hosts:
    - testdomain.net
Virtual Service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dremio
spec:
  hosts:
  - testdomain.net
  gateways:
  - dremio-gateway
  http:
  - match:
    - port: 443
    - port: 80
    route:
    - destination:
        host: dremio-client
        port:
          number: 9047
  tcp:
  - match:
    - port: 31020
    route:
    - destination:
        host: dremio-client
        port:
          number: 31010
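As a quick sanity check (a sketch, assuming the gateway runs in istio-system; the pod name is a placeholder), the TCP route above should materialize on the gateway as a tcp_proxy filter pointing at the dremio-client cluster:
istioctl proxy-config clusters istio-ingressgateway-xxxxx.istio-system | grep dremio-client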
Partial Config Dump
{
"name": "0.0.0.0_31020",
"active_state": {
"version_info": "2020-07-21T12:11:49Z/9",
"listener": {
"#type": "type.googleapis.com/envoy.api.v2.Listener",
"name": "0.0.0.0_31020",
"address": {
"socket_address": {
"address": "0.0.0.0",
"port_value": 31020
}
},
"filter_chains": [
{
"filter_chain_match": {
"server_names": [
"testdomain.net"
]
},
"filters": [
{
"name": "istio.stats",
"typed_config": {
"#type": "type.googleapis.com/udpa.type.v1.TypedStruct",
"type_url": "type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm",
"value": {
"config": {
"root_id": "stats_outbound",
"vm_config": {
"vm_id": "tcp_stats_outbound",
"runtime": "envoy.wasm.runtime.null",
"code": {
"local": {
"inline_string": "envoy.wasm.stats"
}
}
},
"configuration": "{\n \"debug\": \"false\",\n \"stat_prefix\": \"istio\"\n}\n"
}
}
}
},
{
"name": "envoy.tcp_proxy",
"typed_config": {
"#type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
"stat_prefix": "outbound|31010||dremio-client.dremio.svc.cluster.local",
"cluster": "outbound|31010||dremio-client.dremio.svc.cluster.local",
"access_log": [
{
"name": "envoy.file_access_log",
"typed_config": {
"#type": "type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog",
"path": "/dev/stdout",
"format": "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% \"%DYNAMIC_METADATA(istio.mixer:status)%\" \"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n"
}
}
]
}
}
],
"transport_socket": {
"name": "envoy.transport_sockets.tls",
"typed_config": {
"#type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
"common_tls_context": {
"tls_params": {
"tls_minimum_protocol_version": "TLSv1_0",
"tls_maximum_protocol_version": "TLSv1_3"
},
"alpn_protocols": [
"h2",
"http/1.1"
],
"tls_certificate_sds_secret_configs": [
{
"name": "testdomain-credentials",
"sds_config": {
"api_config_source": {
"api_type": "GRPC",
"grpc_services": [
{
"google_grpc": {
"target_uri": "unix:/var/run/ingress_gateway/sds",
"stat_prefix": "sdsstat"
}
}
]
}
}
}
]
},
"require_client_certificate": false
}
}
}
],
"listener_filters": [
{
"name": "envoy.listener.tls_inspector",
"typed_config": {
"#type": "type.googleapis.com/envoy.config.filter.listener.tls_inspector.v2.TlsInspector"
}
}
],
"traffic_direction": "OUTBOUND"
},
"last_updated": "2020-07-21T12:11:50.303Z"
}
}
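The relevant part of that dump is the filter_chain_match with server_names: ["testdomain.net"]. It can also be pulled straight from the running gateway (a hedged sketch; the pod name is a placeholder and field names can differ slightly between Envoy API versions):
istioctl proxy-config listeners istio-ingressgateway-xxxxx.istio-system --port 31020 -o json | jq '.[].filterChains[].filterChainMatch'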
By enabling trace logging on the Envoy conn_handler, we can see the following message:
closing connection: no matching filter chain found

After getting the "no matching filter chain" message, I looked at the filter chain for port 31020: it matches on the domain I provided in my Gateway config (server_names: testdomain.net). It turned out that my application (an ODBC client) was not sending the host via SNI when connecting, so no filter chain matched.
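A quick way to confirm the SNI theory with openssl (a sketch; -noservername requires OpenSSL 1.1.1+, and older versions may not send SNI at all unless -servername is given):
# With SNI: matches the filter chain for testdomain.net, handshake completes
openssl s_client -connect testdomain.net:31020 -servername testdomain.net </dev/null
# Without SNI: reproduces "no matching filter chain found" on the gateway
openssl s_client -connect testdomain.net:31020 -noservername </dev/null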
The solution is simply to replace the host domain with '*':
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dremio-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - testdomain.net
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: testdomain-credentials
    hosts:
    - testdomain.net
  - port:
      number: 31020
      name: odbc-dremio-tls
      protocol: TLS
    tls:
      mode: SIMPLE
      minProtocolVersion: TLSV1_0
      maxProtocolVersion: TLSV1_3
      credentialName: testdomain-credentials
    hosts:
    - '*'

Related

Adding ServiceMonitor to existing Express microservice not being registered in Prometheus

I'm trying to set up monitoring for my microservices, and I've created a simple microservice just to try out Prometheus monitoring. But I'm running into the issue that Prometheus is not registering the service.
I'm deploying an example service like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinger-svc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pinger-svc
  template:
    metadata:
      labels:
        app: pinger-svc
    spec:
      restartPolicy: Always
      containers:
      - name: pinger-svc
        image: pingerservice:latest
        ports:
        - containerPort: 3000
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: pinger-svc
  labels:
    app: pinger-svc
spec:
  selector:
    app: pinger-svc
  type: NodePort
  ports:
  - name: web
    port: 80
    targetPort: 3000
    nodePort: 32333
and I'm trying to set up a ServiceMonitor with:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-kube-prometheus-pinger-service
spec:
  selector:
    matchLabels:
      app: pinger-svc
  endpoints:
  - port: web
    path: /metrics
    interval: "10s"
In the app I'm also exposing metrics at the /metrics endpoint:
const express = require('express');
const promBundle = require('express-prom-bundle');
const app = express();
const metricsMiddleware = promBundle({ includeMethod: true });
app.use(metricsMiddleware); // exposes GET /metrics
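Before looking at Prometheus itself, it is worth confirming the endpoint is reachable through the Service (a sketch using the names from the manifests above):
kubectl port-forward svc/pinger-svc 8080:80 &
curl -s localhost:8080/metrics | head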
Under Targets in the Prometheus UI, my pinger-svc target is not listed.
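With kube-prometheus-stack, a common cause of a missing target is that the Prometheus custom resource only selects ServiceMonitors that carry the chart's release label and that live in watched namespaces. A hedged diagnostic sketch (resource names follow the 'prometheus' Helm release used here):
# Which labels must a ServiceMonitor carry to be picked up?
kubectl get prometheus -o jsonpath='{.items[0].spec.serviceMonitorSelector}'; echo
# Which namespaces are watched?
kubectl get prometheus -o jsonpath='{.items[0].spec.serviceMonitorNamespaceSelector}'; echo
# Compare with the labels on the ServiceMonitor itself
kubectl get servicemonitor prometheus-kube-prometheus-pinger-service --show-labels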
Here's my prometheus-operator pod config:
apiVersion: v1
kind: Pod
metadata:
name: prometheus-kube-prometheus-operator-cb55c97b9-4rh4f
generateName: prometheus-kube-prometheus-operator-cb55c97b9-
namespace: default
uid: c3c710b9-868f-41d9-b35e-7d55a212dd6f
resourceVersion: '176924'
creationTimestamp: '2021-09-16T13:34:16Z'
labels:
app: kube-prometheus-stack-operator
app.kubernetes.io/instance: prometheus
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: kube-prometheus-stack
app.kubernetes.io/version: 18.0.8
chart: kube-prometheus-stack-18.0.8
heritage: Helm
pod-template-hash: cb55c97b9
release: prometheus
ownerReferences:
- apiVersion: apps/v1
kind: ReplicaSet
name: prometheus-kube-prometheus-operator-cb55c97b9
uid: 62dbdb1e-ed3f-4f6f-a4e6-6da48d084681
controller: true
blockOwnerDeletion: true
managedFields:
- manager: k3s
operation: Update
apiVersion: v1
time: '2021-09-16T13:34:47Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:generateName': {}
'f:labels':
.: {}
'f:app': {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/part-of': {}
'f:app.kubernetes.io/version': {}
'f:chart': {}
'f:heritage': {}
'f:pod-template-hash': {}
'f:release': {}
'f:ownerReferences':
.: {}
'k:{"uid":"62dbdb1e-ed3f-4f6f-a4e6-6da48d084681"}':
.: {}
'f:apiVersion': {}
'f:blockOwnerDeletion': {}
'f:controller': {}
'f:kind': {}
'f:name': {}
'f:uid': {}
'f:spec':
'f:containers':
'k:{"name":"kube-prometheus-stack"}':
.: {}
'f:args': {}
'f:image': {}
'f:imagePullPolicy': {}
'f:name': {}
'f:ports':
.: {}
'k:{"containerPort":10250,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
'f:name': {}
'f:protocol': {}
'f:resources': {}
'f:securityContext':
.: {}
'f:allowPrivilegeEscalation': {}
'f:readOnlyRootFilesystem': {}
'f:terminationMessagePath': {}
'f:terminationMessagePolicy': {}
'f:volumeMounts':
.: {}
'k:{"mountPath":"/cert"}':
.: {}
'f:mountPath': {}
'f:name': {}
'f:readOnly': {}
'f:dnsPolicy': {}
'f:enableServiceLinks': {}
'f:restartPolicy': {}
'f:schedulerName': {}
'f:securityContext':
.: {}
'f:fsGroup': {}
'f:runAsGroup': {}
'f:runAsNonRoot': {}
'f:runAsUser': {}
'f:serviceAccount': {}
'f:serviceAccountName': {}
'f:terminationGracePeriodSeconds': {}
'f:volumes':
.: {}
'k:{"name":"tls-secret"}':
.: {}
'f:name': {}
'f:secret':
.: {}
'f:defaultMode': {}
'f:secretName': {}
'f:status':
'f:conditions':
'k:{"type":"ContainersReady"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:status': {}
'f:type': {}
'k:{"type":"Initialized"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:status': {}
'f:type': {}
'k:{"type":"Ready"}':
.: {}
'f:lastProbeTime': {}
'f:lastTransitionTime': {}
'f:status': {}
'f:type': {}
'f:containerStatuses': {}
'f:hostIP': {}
'f:phase': {}
'f:podIP': {}
'f:podIPs':
.: {}
'k:{"ip":"10.42.0.53"}':
.: {}
'f:ip': {}
'f:startTime': {}
selfLink: >-
/api/v1/namespaces/default/pods/prometheus-kube-prometheus-operator-cb55c97b9-4rh4f
status:
phase: Running
conditions:
- type: Initialized
status: 'True'
lastProbeTime: null
lastTransitionTime: '2021-09-16T13:34:16Z'
- type: Ready
status: 'True'
lastProbeTime: null
lastTransitionTime: '2021-09-16T13:34:46Z'
- type: ContainersReady
status: 'True'
lastProbeTime: null
lastTransitionTime: '2021-09-16T13:34:46Z'
- type: PodScheduled
status: 'True'
lastProbeTime: null
lastTransitionTime: '2021-09-16T13:34:16Z'
hostIP: 192.168.50.85
podIP: 10.42.0.53
podIPs:
- ip: 10.42.0.53
startTime: '2021-09-16T13:34:16Z'
containerStatuses:
- name: kube-prometheus-stack
state:
running:
startedAt: '2021-09-16T13:34:46Z'
lastState: {}
ready: true
restartCount: 0
image: 'quay.io/prometheus-operator/prometheus-operator:v0.50.0'
imageID: >-
quay.io/prometheus-operator/prometheus-operator@sha256:ab4f480f2cc65e98f1b4dfb93eb3a41410036359c238fdd60bb3f59deca8d522
containerID: >-
containerd://4f03044b1013f18f918034e96950908ee3fc31f8de901a740a8594b273b957bc
started: true
qosClass: BestEffort
spec:
volumes:
- name: tls-secret
secret:
secretName: prometheus-kube-prometheus-admission
defaultMode: 420
- name: kube-api-access-6rbnp
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- path: namespace
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
defaultMode: 420
containers:
- name: kube-prometheus-stack
image: 'quay.io/prometheus-operator/prometheus-operator:v0.50.0'
args:
- '--kubelet-service=kube-system/prometheus-kube-prometheus-kubelet'
- '--localhost=127.0.0.1'
- >-
--prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.50.0
- '--config-reloader-cpu-request=100m'
- '--config-reloader-cpu-limit=100m'
- '--config-reloader-memory-request=50Mi'
- '--config-reloader-memory-limit=50Mi'
- '--thanos-default-base-image=quay.io/thanos/thanos:v0.17.2'
- '--web.enable-tls=true'
- '--web.cert-file=/cert/cert'
- '--web.key-file=/cert/key'
- '--web.listen-address=:10250'
- '--web.tls-min-version=VersionTLS13'
ports:
- name: https
containerPort: 10250
protocol: TCP
resources: {}
volumeMounts:
- name: tls-secret
readOnly: true
mountPath: /cert
- name: kube-api-access-6rbnp
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: prometheus-kube-prometheus-operator
serviceAccount: prometheus-kube-prometheus-operator
nodeName: ubuntu
securityContext:
runAsUser: 65534
runAsGroup: 65534
runAsNonRoot: true
fsGroup: 65534
schedulerName: default-scheduler
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
priority: 0
enableServiceLinks: true
preemptionPolicy: PreemptLowerPriority

How to set Hash Key value in envoy proxy for RING_HASH load balancing

I am trying to set up RING_HASH load balancing on the Envoy proxy based on a request header. From the documentation it looks like I have to set a hash key in filter_metadata:
filter_metadata:
  envoy.lb:
    hash_key: "YOUR HASH KEY"
However, I am not able to find out what the possible values/expressions for hash_key are. I tried configuring a fixed value for the hash key, but requests still do not go to the same upstream server.
My envoy.yaml:
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: some_service }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: RING_HASH
    metadata:
      filter_metadata:
        envoy.lb:
          hash_key: "fixed_value"
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.2
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.3
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.4
                port_value: 8080
It turns out that the envoy.lb metadata is used to provide the hash of an upstream host, not to configure the hash key for a request.
There is another setting, hash_policy, which should be used instead.
My final working Envoy configuration looks like this; the sticky session is based on the header sticky-key:
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: some_service
                  hash_policy:
                  - header:
                      header_name: sticky-key
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: RING_HASH
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.2
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.3
                port_value: 8080
        - endpoint:
            address:
              socket_address:
                address: 172.19.0.4
                port_value: 8080
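A quick way to check the stickiness (a sketch, assuming Envoy is listening on localhost:10000 as configured above): requests that carry the same sticky-key header should keep hashing to the same upstream endpoint, while a different key will generally land elsewhere.
for i in 1 2 3 4 5; do curl -s -H "sticky-key: user-123" http://localhost:10000/; done
curl -s -H "sticky-key: user-456" http://localhost:10000/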

How to deploy Azure IoT Hub storage account archiving in Bicep?

I am trying to use Azure Resource Manager and Bicep to deploy an IoT Hub and a storage account. IoT Hub has a feature to store all messages in a storage account for archiving purposes. The IoT Hub should access the storage account with a user-assigned managed identity.
I would like to deploy all of this in a single ARM deployment written in Bicep. The problem is deploying the IoT Hub with a user-assigned identity and setting up the archive custom route. I get this error:
{
"code": "DeploymentFailed",
"message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
"details": [
{
"code": "400140",
"message": "endpointName:messageArchive, exceptionMessage:Invalid operation: Managed identity is not enabled for IotHub ... errorcode: IH400140."
}
]
}
My Bicep file looks like this:
resource messageArchive 'Microsoft.Storage/storageAccounts@2021-04-01' = {
name: 'messagearchive4631'
location: resourceGroup().location
kind: 'StorageV2'
sku: {
name: 'Standard_GRS'
}
properties: {
accessTier: 'Hot'
supportsHttpsTrafficOnly: true
}
}
resource messageArchiveBlobService 'Microsoft.Storage/storageAccounts/blobServices@2021-04-01' = {
name: 'default'
parent: messageArchive
resource messageArchiveContainer 'containers@2021-02-01' = {
name: 'iot-test-4631-container'
properties: {
publicAccess: 'None'
}
}
}
resource iotIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
name: 'iot-test-access-archive-4631'
location: resourceGroup().location
}
resource iotAccesToStorage 'Microsoft.Authorization/roleAssignments@2020-08-01-preview' = {
name: guid(extensionResourceId(messageArchive.id, messageArchive.type, 'iot-test-access-archive-4631'))
scope: messageArchive
properties: {
roleDefinitionId: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe'
principalId: iotIdentity.properties.principalId
description: 'Allow access for IoT Hub'
}
}
resource iothub 'Microsoft.Devices/IotHubs@2021-03-31' = {
name: 'iot-test-4631'
location: resourceGroup().location
sku: {
name: 'B1'
capacity: 1
}
identity: {
type: 'UserAssigned'
userAssignedIdentities:{
'${iotIdentity.id}': {}
}
}
dependsOn:[
iotAccesToStorage
]
properties: {
features: 'None'
eventHubEndpoints: {
events: {
retentionTimeInDays: 1
partitionCount: 4
}
}
routing: {
endpoints: {
storageContainers: [
{
name: 'messageArchive'
endpointUri: 'https://messagearchive4631.blob.core.windows.net/'
containerName: 'iot-test-4631-container'
batchFrequencyInSeconds: 100
maxChunkSizeInBytes: 104857600
encoding: 'Avro'
fileNameFormat: '{iothub}/{YYYY}/{MM}/{DD}/{HH}/{mm}_{partition}.avro'
authenticationType: 'identityBased'
}
]
}
routes: [
{
name: 'EventHub'
source: 'DeviceMessages'
endpointNames: [
'events'
]
isEnabled: true
}
{
name: 'messageArchiveRoute'
source: 'DeviceMessages'
endpointNames: [
'messageArchive'
]
isEnabled: true
}
]
fallbackRoute: {
source: 'DeviceMessages'
endpointNames: [
'events'
]
isEnabled: true
}
}
}
}
I tried removing the message routing endpoint block from the IoT Hub:
endpoints: {
storageContainers: [
{
name: 'messageArchive'
endpointUri: 'https://messagearchive4631.blob.core.windows.net/'
containerName: 'iot-test-4631-container'
batchFrequencyInSeconds: 100
maxChunkSizeInBytes: 104857600
encoding: 'Avro'
fileNameFormat: '{iothub}/{YYYY}/{MM}/{DD}/{HH}/{mm}_{partition}.avro'
authenticationType: 'identityBased'
}
]
}
and deploying it once. This deployment works. If I then add the block back and deploy again, it works as expected.
Is it possible to do this in a single deployment?
I figured it out myself. I'm using a user-assigned managed identity, and therefore I was missing this in the IoT Hub endpoint storage container configuration:
authenticationType: 'identityBased'
identity: {
userAssignedIdentity: iotIdentity.id
}
The complete IoT Hub endpoint configuration looks like this:
endpoints: {
storageContainers: [
{
name: 'RawDataStore'
endpointUri: 'https://${nameRawDataStore}.blob.${environment().suffixes.storage}/'
containerName: nameIotHub
batchFrequencyInSeconds: 100
maxChunkSizeInBytes: 104857600
encoding: 'Avro'
fileNameFormat: '{iothub}/{YYYY}/{MM}/{DD}/{HH}/{mm}_{partition}.avro'
authenticationType: 'identityBased'
identity: {
userAssignedIdentity: iotIdentity.id
}
}
]
}
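With that in place the whole template can go out in one deployment; a minimal sketch (resource group and file name are placeholders):
az deployment group create --resource-group my-iot-rg --template-file main.bicep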

Istio virtual service header rules are not applied

So I have a very unique situation.
Problem
Virtual service route rules are not applied. We have a Buzzfeed SSO setup in our cluster. We want to modify response headers (i.e. add a header) for each request that matches the URI /sign_in.
Buzzfeed SSO has its own namespace.
To accomplish this I have created a virtual service.
Steps to Reproduce:
We used this virtual service spec to create the route rules:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sso-auth-injector
spec:
  hosts:
  - sso-auth
  http:
  - match:
    - uri:
        prefix: /sign_in
      ignoreUriCase: true
    route:
    - destination:
        host: sso-auth
    headers:
      response:
        add:
          foo: bar
      request:
        add:
          hello: world
Analysis
1) istioctl x describe has this output:
Pod: sso-auth-58744b56cd-lwqrh.sso
Pod Ports: 4180 (sso-auth), 15090 (istio-proxy)
Suggestion: add ‘app’ label to pod for Istio telemetry.
Suggestion: add ‘version’ label to pod for Istio telemetry.
Service: sso-auth.sso
Port: http 80/HTTP targets pod port 4180
Pod is PERMISSIVE (enforces HTTP/mTLS) and clients speak HTTP
VirtualService: sso-auth-injector.sso
/sign_in uncased
2) istioctl output. Not attaching all of the rules, but here is the one for outbound|80|:
"routes": [
{
"match": {
"prefix": "/sign_in",
"caseSensitive": false
},
"route": {
"cluster": "outbound|80||sso-auth.sso.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxGrpcTimeout": "0s"
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking/v1alpha3/namespaces/sso/virtual-service/sso-auth-injector"
}
}
},
"decorator": {
"operation": "sso-auth.sso.svc.cluster.local:80/sign_in*"
},
"typedPerFilterConfig": {
"mixer": {
"#type": "type.googleapis.com/istio.mixer.v1.config.client.ServiceConfig",
"disableCheckCalls": true,
"mixerAttributes": {
"attributes": {
"destination.service.host": {
"stringValue": "sso-auth.sso.svc.cluster.local"
},
"destination.service.name": {
"stringValue": "sso-auth"
},
"destination.service.namespace": {
"stringValue": "sso"
},
"destination.service.uid": {
"stringValue": "istio://sso/services/sso-auth"
}
}
},
"forwardAttributes": {
"attributes": {
"destination.service.host": {
"stringValue": "sso-auth.sso.svc.cluster.local"
},
"destination.service.name": {
"stringValue": "sso-auth"
},
"destination.service.namespace": {
"stringValue": "sso"
},
"destination.service.uid": {
"stringValue": "istio://sso/services/sso-auth"
}
}
}
}
},
"requestHeadersToAdd": [
{
"header": {
"key": "hello",
"value": "world"
},
"append": true
}
],
"responseHeadersToAdd": [
{
"header": {
"key": "foo",
"value": "bar"
},
"append": true
}
]
}
]
},
Issues/Questions
These rules don't take effect. Each request is passed to the service, but the headers are not modified.
Shouldn't the route rules apply to inbound requests rather than outbound (as shown in the generated config)?
We want to modify response headers (i.e. add a header) for each request that matches the URI /sign_in.
I made an example, tested it, and everything works just fine.
Check the virtual service, the tests, and the whole example below.
Virtual service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh
  hosts:
  - nginx.default.svc.cluster.local
  http:
  - name: match
    headers:
      response:
        add:
          foo: "bar"
    match:
    - uri:
        prefix: /sign_in
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v1
Everything you need for the test:
apiVersion: v1
kind: Pod
metadata:
  name: ubu1
spec:
  containers:
  - name: ubu1
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh
  hosts:
  - nginx.default.svc.cluster.local
  http:
  - name: match
    headers:
      response:
        add:
          foo: "bar"
    match:
    - uri:
        prefix: /sign_in
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginxdest
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      run: nginx1
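To reproduce, apply the manifests and wait for the pods to come up (the file name is a placeholder; the ubu1 container installs curl at startup, so give it a moment before running the test below):
kubectl apply -f nginx-example.yaml
kubectl wait --for=condition=Ready pod/ubu1 --timeout=120s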
Test from the Ubuntu pod
I used curl -I to display the response headers:
curl -I nginx/sign_in
HTTP/1.1 200 OK
server: envoy
date: Tue, 24 Mar 2020 07:44:10 GMT
content-type: text/html
content-length: 13
last-modified: Thu, 12 Mar 2020 06:52:43 GMT
etag: "5e69dc3b-d"
accept-ranges: bytes
x-envoy-upstream-service-time: 3
foo: bar
As you can see, the foo: bar header is added correctly.
Additional links about headers:
https://istiobyexample.dev/response-headers/
Istio adds and removed headers, but doesn't overwrite
How to display request headers with command line curl
In your istioctl output I can see you might have a 503 error:
"retriableStatusCodes": [
503
]
Additional links about 503 errors:
https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes
https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule
Accessing service using istio ingress gives 503 error when mTLS is enabled

Kubernetes e2e tests fail with spec.configSource: Invalid value

We are running Kubernetes (1.15.3) e2e tests via Sonobuoy, and 3 of them fail with the same error:
/go/src/k8s-tests/test/e2e/framework/framework.go:674
error setting labels on node
Expected error:
<*errors.StatusError | 0xc001e6def0>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
Status: "Failure",
Message: "Node \"nightly-e2e-rhel76-1vm\" is invalid: spec.configSource: Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil",
Reason: "Invalid",
Details: {
Name: "nightly-e2e-rhel76-1vm",
Group: "",
Kind: "Node",
UID: "",
Causes: [
{
Type: "FieldValueInvalid",
Message: "Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil",
Field: "spec.configSource",
},
],
RetryAfterSeconds: 0,
},
Code: 422,
},
}
Node "nightly-e2e-rhel76-1vm" is invalid: spec.configSource: Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil
not to have occurred
/go/src/k8s-tests/test/e2e/apps/daemon_set.go:170
kubectl get nodes -o yaml gives these fields, and yes, we do have kubelet dynamic config enabled:
spec:
  configSource:
    configMap:
      kubeletConfigKey: kubelet
      name: kubelet-config-1.15.3-1581671888
      namespace: kube-system
[cloud-user@nightly-e2e-rhel76-1vm ~]$ kubectl get no nightly-e2e-rhel76-1vm -o json | jq .status.config
{
"active": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
},
"assigned": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
},
"lastKnownGood": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
}
}
What are we missing?
Thanks.