Adding SASL authentication in Banzai Cloud Kafka

I am looking to add SASL plaintext authentication to Banzai Cloud Kafka. I have added the following configs in my read-only config section:
readOnlyConfig: |
  auto.create.topics.enable=false
  cruise.control.metrics.topic.auto.create=true
  cruise.control.metrics.topic.num.partitions=1
  cruise.control.metrics.topic.replication.factor=2
  delete.topic.enable=true
  offsets.topic.replication.factor=2
  group.initial.rebalance.delay.ms=3000
  sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
  sasl.enabled.mechanisms=SCRAM-SHA-256
  listener.name.external.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="testuser";
I have the following in the listener config:
listenersConfig:
  externalListeners:
    - type: "sasl_plaintext"
      name: "external"
      externalStartingPort: 51985
      containerPort: 29094
      accessMethod: LoadBalancer
  internalListeners:
    - type: "plaintext"
      name: "internal"
      containerPort: 29092
      usedForInnerBrokerCommunication: true
    - type: "plaintext"
      name: "controller"
      containerPort: 29093
      usedForInnerBrokerCommunication: false
      usedForControllerCommunication: true
When I try to connect a producer or consumer, Kafka returns an Authentication/Authorization failed error.
I am setting the following client properties:
session.timeout.ms=60000
partition.assignment.strategy=org.apache.kafka.clients.consumer.StickyAssignor
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="testuser";
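For reference, SCRAM credentials also have to exist on the cluster itself. A minimal sketch of registering one and testing the external listener from the CLI (the bootstrap addresses and topic name are placeholders; older Kafka versions use --zookeeper instead of --bootstrap-server for SCRAM users):
# Register the SCRAM-SHA-256 credential for user "user" on the cluster
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --add-config 'SCRAM-SHA-256=[iterations=4096,password=testuser]' \
  --entity-type users --entity-name user

# Consume through the external listener using the client properties above
kafka-console-consumer.sh --bootstrap-server <loadbalancer-host>:51985 \
  --topic test --consumer.config client.properties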
Can anyone suggest what might be wrong?

Related

Inconsistent behaviour while achieving stickiness using Kong Ingress controller

I am using the Kong ingress controller on EKS.
High-level flow:
NLB → Kong ingress controller and proxy (running in the same pod) → k8s service → backend pods
I am trying to achieve stickiness using the hash_on: cookie configuration on the upstream.
I am using the session and hmac-auth plugins to generate the session cookie.
1st request from the client: the first time the client sends a request, the NLB forwards it to the Kong ingress controller and from there it goes to one of the backend pods. Since this is the first request, Kong generates a cookie and returns it in the response to the client.
2nd request from the client: the client now includes the cookie it received in the first response. When this request reaches Kong, Kong forwards it to a different pod than the one that served the first request.
On the 3rd, 4th, ..., nth request, Kong keeps forwarding to the same pod it used for the 2nd request.
How can we achieve stickiness for every request?
My expectation was that when Kong receives the first request from a client it would generate a cookie containing some detail specific to the pod it sent the traffic to, and that whenever the same client sends that cookie back, Kong would use it to forward the request to the same pod it used the first time. That is not happening: I get stickiness from the 2nd request onwards, but not between the 1st and 2nd requests.
Ingress resource used for defining paths:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    konghq.com/strip-path: "true"
  name: kong-ingress-bk-srvs
  namespace: default
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - backend:
              service:
                name: httpserver-service-cip
                port:
                  number: 8084
            path: /api/v1/serverservice
            pathType: Prefix
          - backend:
              service:
                name: httpserver-service-cip-health
                port:
                  number: 8084
            path: /api/v1/healthservice
            pathType: Prefix
Upstream config:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: stickiness-upstream
upstream:
  hash_on: cookie
  hash_on_cookie: my-test-cookie
  hash_on_cookie_path: /
Session plugin:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: session-plugin
config:
  cookie_path: /
  cookie_name: my-test-cookie
  storage: cookie
  cookie_secure: false
  cookie_httponly: false
  cookie_samesite: None
plugin: session
HMAC plugin:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: hmac-plugin
config:
  validate_request_body: true
  enforce_headers:
    - date
    - request-line
    - digest
  algorithms:
    - hmac-sha512
plugin: hmac-auth
Consumer:
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: kong-consumer
  annotations:
    kubernetes.io/ingress.class: kong
username: consumer-user-3
custom_id: consumer-id-3
credentials:
  - kong-cred
Pod service config (Ingress backend service):
apiVersion: v1
kind: Service
metadata:
  annotations:
    konghq.com/override: stickiness-upstream
    konghq.com/plugins: session-plugin,hmac-plugin
  creationTimestamp: "2023-02-04T16:44:00Z"
  labels:
    app: httpserver
  name: httpserver-service-cip
  namespace: default
  resourceVersion: "6729057"
  uid: 481b7d8c-1f07-4293-809c-3b4b7dca41e0
spec:
  clusterIP: 10.101.99.87
  clusterIPs:
    - 10.101.99.87
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: comm-port
      port: 8085
      protocol: TCP
      targetPort: 8085
    - name: dur-port
      port: 8084
      protocol: TCP
      targetPort: 8084
  selector:
    app: httpserver
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
  type: ClusterIP
status:
  loadBalancer: {}
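For what it's worth, the behaviour can be reproduced from the command line by replaying the cookie Kong sets (a sketch; the NLB hostname and endpoint are placeholders, and the hmac-auth headers are omitted for brevity):
# 1st request: no cookie yet, Kong picks a pod and returns my-test-cookie in the response
curl -c cookies.txt http://<nlb-dns-name>/api/v1/serverservice/<endpoint>
# Later requests: replay the cookie so the hash_on: cookie policy can pin the upstream target
curl -b cookies.txt http://<nlb-dns-name>/api/v1/serverservice/<endpoint>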

GCP API Gateway: Path parameters are being passed as query params

I'm trying to use GCP API Gateway to create a single endpoint for several of my backend services (A, B, C, D), each with its own path structure. I have the Gateway configured for one of the services as follows:
swagger: '2.0'
info:
  title: <TITLE>
  description: <DESC>
  version: 1.0.0
schemes:
  - https
produces:
  - application/json
paths:
  /service_a/match/{id_}:
    get:
      summary: <SUMMARY>
      description: <DESC>
      operationId: match_id_
      parameters:
        - required: true
          type: string
          name: id_
          in: path
        - required: true
          type: boolean
          default: false
          name: bool_first
          in: query
        - required: false
          type: boolean
          default: false
          name: bool_Second
          in: query
      x-google-backend:
        address: <cloud_run_url>/match/{id_}
        deadline: 60.0
      responses:
        '200':
          description: Successful Response
        '422':
          description: Validation
This deploys just fine. But when I hit the endpoint gateway_url/service_a/match/123, it gets routed to cloud_run_url/match/%7Bid_%7D?id_=123 instead of cloud_run_url/match/123.
How can I fix this?
Editing my answer as I misunderstood the issue.
It seems like the { and } are leaking from your configuration as ASCII-encoded characters (%7B and %7D), so when the request is forwarded to
x-google-backend:
  address: <cloud_run_url>/match/{id_}
  deadline: 60.0
it doesn't carry the correct ID.
So this looks like an escaping issue in your YAML file, and you can approach it the same way as in this thread about using path params.
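For what it's worth, a commonly used workaround (a sketch, assuming the backend can handle the original request path) is to point x-google-backend at the service root and let the gateway append the path, instead of templating {id_} into address:
x-google-backend:
  address: <cloud_run_url>
  path_translation: APPEND_PATH_TO_ADDRESS
  deadline: 60.0
Note that with APPEND_PATH_TO_ADDRESS the backend receives the full gateway path (/service_a/match/123 in this example), so the backend routes may need to account for the /service_a prefix.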

Authorization Policy issue when following Istio 1.5 Security

I was trying to set up an AuthorizationPolicy by following the Istio 1.5 security documentation:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
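For context, the Istio 1.5 security task pairs this policy with a RequestAuthentication roughly like the following (a sketch; the issuer and jwksUri are the sample values from the docs, and the selector mirrors the policy above):
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"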
But when I apply this policy to my service, I get 'RBAC: access denied'.
Please find the Envoy proxy logs below:
[Envoy (Epoch 0)] [2020-03-27 14:40:31.225][24][debug][rbac] [external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:68] checking request: remoteAddress: 10.1.0.65:57780, localAddress: 10.1.0.64:9080, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account, subjectPeerCertificate: ,
headers: ':authority', 'localhost'
':path', '/productpage'
':method', 'GET'
'content-type', 'application/json'
'authorization', 'Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IkRIRmJwb0lVcXJZOHQyenBBMnFYZkNtcjVWTzVaRXI0UnpIVV8tZW52dlEiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjM1MzczOTExMDQsImdyb3VwcyI6WyJncm91cDEiLCJncm91cDIiXSwiaWF0IjoxNTM3MzkxMTA0LCJpc3MiOiJ0ZXN0aW5nQHNlY3VyZS5pc3Rpby5pbyIsInNjb3BlIjpbInNjb3BlMSIsInNjb3BlMiJdLCJzdWIiOiJ0ZXN0aW5nQHNlY3VyZS5pc3Rpby5pbyJ9.EdJnEZSH6X8hcyEii7c8H5lnhgjB5dwo07M5oheC8Xz8mOllyg--AHCFWHybM48reunF--oGaG6IXVngCEpVF0_P5DwsUoBgpPmK1JOaKN6_pe9sh0ZwTtdgK_RP01PuI7kUdbOTlkuUi2AO-qUyOm7Art2POzo36DLQlUXv8Ad7NBOqfQaKjE9ndaPWT7aexUsBHxmgiGbz1SyLH879f7uHYPbPKlpHU6P9S-DaKnGLaEchnoKnov7ajhrEhGXAQRukhDPKUHO9L30oPIr5IJllEQfHYtt6IZvlNUGeLUcif3wpry1R5tBXRicx2sXMQ7LyuDremDbcNy_iE76Upg'
'user-agent', 'PostmanRuntime/7.22.0'
'accept', '/'
'cache-control', 'no-cache'
'postman-token', 'f06a794e-1bd7-4c03-ad78-1638a309b71a'
'accept-encoding', 'gzip, deflate, br'
'content-length', '4868'
'x-forwarded-for', '192.168.65.3'
'x-forwarded-proto', 'http'
'x-request-id', '012804b1-67ca-942d-9636-40478e932e75'
'x-b3-traceid', 'f8f9e4f94847aec5ce7dec347a5bfa5d'
'x-b3-spanid', 'ce7dec347a5bfa5d'
'x-b3-sampled', '1'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/default/sa/bookinfo-productpage;Hash=5e82efecebbaf212aae6359cec7cbc0b6aa281ddeaf3e7adb280c503a5c04a5f;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account'
, dynamicMetadata: filter_metadata {
  key: "istio_authn"
  value {
    fields {
      key: "request.auth.principal"
      value {
        string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
      }
    }
    fields {
      key: "source.namespace"
      value {
        string_value: "istio-system"
      }
    }
    fields {
      key: "source.principal"
      value {
        string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
      }
    }
    fields {
      key: "source.user"
      value {
        string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
      }
    }
  }
}
[Envoy (Epoch 0)] [2020-03-27 14:40:31.225][24][debug][rbac] [external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:111] enforced denied
[2020-03-27T14:40:31.224Z] “GET /productpage HTTP/1.1” 403 - “-” “-” 0 19 1 - “192.168.65.3” “PostmanRuntime/7.22.0” “012804b1-67ca-942d-9636-40478e932e75” “localhost” “-” - - 10.1.0.64:9080 192.168.65.3:0 outbound_.9080_._.productpage.default.svc.cluster.local -
Please help me to solve this issue. Thanks in advance
Try updating Istio to v1.5.1.
According to the Istio release notes, a bug was fixed that affected the security.istio.io/v1beta1 request authentication policy you are using:
Fixed OpenID discovery does not work with beta request authentication policy (Issue 21954)
To perform the Istio upgrade, please review the Istio upgrade documentation page.
Hope it helps.
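After upgrading, the setup can be re-tested with the demo token from the same Istio security task (a sketch; the host and path are taken from the logs above):
# Fetch the sample JWT published for the Istio security task
TOKEN=$(curl -s https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/demo.jwt)
# With the policy working, the request with the token should return 200 and the one without it 403
curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer $TOKEN" http://localhost/productpage
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/productpage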

How to resolve this node affinity with Envoy

I provide a gRPC service that unfortunately has to have node affinity between BeginTransaction and Commit API Calls.
The Consumer API calls sequence is typically:
BeginTransaction() returns txnID
DoStuff(txnID, moreParams...)
DoStuff(txnID, moreParams...)
...
Commit(txnID)
Consumers can be multithreaded processes that make simultaneous calls to my API, so they might be using hundreds of Transactions at any point in time.
If I use an Envoy proxy as my service entry point, BeginTransaction can be routed to any healthy node in the cluster, but Envoy must ensure that subsequent calls using the returned txnID are routed to that same node.
Passing any context info in http headers, or in whatsoever part of the messages, is acceptable in my case.
I made some progress using the ring-hash load balancer.
In the Envoy proxy server configuration (look for "hash"):
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: http2
          stat_prefix: ingress_http  # just for statistics
          route_config:
            name: local_route
            virtual_hosts:
            - name: samplefront_virtualhost
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/mycompany.sample.v1"
                  grpc: {}
                route:
                  cluster: sampleserver
                  hash_policy:
                  - header:
                      header_name: "x-session-hash"
              - match:
                  prefix: "/bbva.sample.admin"
                  grpc: {}
                route:
                  cluster: sampleadmin
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: sampleserver
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: ring_hash
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: sampleserver
        port_value: 80  # Connect to the sidecar Envoy
  - name: sampleadmin
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: sampleadmin
        port_value: 80  # Connect to the sidecar Envoy
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
In my consumers, I create a random hash just before BeginTransaction() and I make sure it is sent in the x-session-hash header every single time until Commit(txnId)
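Concretely, the client side looks roughly like this (a Python sketch; the stub, request, and method names are hypothetical stand-ins for the generated gRPC client):
import uuid

# Sketch only: "stub" is assumed to be a generated gRPC client for the service above.

def run_transaction(stub, request):
    # One random value per transaction; Envoy's ring_hash policy hashes this header,
    # so every call carrying the same x-session-hash lands on the same upstream node.
    metadata = [("x-session-hash", uuid.uuid4().hex)]

    txn = stub.BeginTransaction(request, metadata=metadata)
    # ... stub.DoStuff(txn_and_params, metadata=metadata) as many times as needed ...
    return stub.Commit(txn, metadata=metadata)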
It works, but it has some limitations:
When I scale up the service by adding more nodes, some operations fail with the error "upstream connect error or disconnect/reset before headers". Failures are perfectly acceptable when a node is lost, but they are hard to accept when a node is added. The good news is that the load gets rebalanced in both cases.
The client must generate the hash before the first call (BeginTransaction) is made, so it is the client who inadvertently dictates which node will serve the requests for that transaction.
I will keep investigating.

Cannot add SSL certificate on ELB using Ansible

I'm trying to create an Elastic Load Balancer and use an existing SSL certificate to secure it, as follows:
---
- name: Setting up Elastic Load Balancer
  hosts: local
  connection: local
  gather_facts: False
  vars_files:
    - vars/global_vars.yml
  tasks:
    - local_action:
        name: "TestLoadbalancer"
        module: ec2_elb_lb
        state: present
        region: 'us-east-1'
        zones:
          - us-east-1c
        listeners:
          - protocol: http
            load_balancer_port: 80
            instance_port: 80
        listeners:
          - protocol: ssl
            load_balancer_port: 443
            instance_protocol: tcp
            instance_port: 7286
            ssl_certificate_id: "arn:aws:iam::xxxxxx:server-certificate/LB_cert"
    - local_action:
        name: "TestLoadbalancer"
        module: ec2_elb_lb
        state: present
        region: 'us-east-1'
        zones:
          - us-east-1c
        listeners:
          - protocol: http
            load_balancer_port: 80
            instance_port: 80
        health_check:
          ping_protocol: http
          ping_port: 80
          ping_path: "/"
          response_timeout: 5
          interval: 30
          unhealthy_threshold: 2
          healthy_threshold: 10
But it is not adding the SSL/TCP listener.
The other listener, HTTP/80, is added and visible in the console.
Why is the SSL one missing? Am I missing any required parameters?
You are adding multiple keys with the name "listeners":
listeners:
  - protocol: http
    ...
listeners:
  - protocol: ssl
    ...
Duplicate keys in a YAML mapping do not merge, so only one of the two listeners lists takes effect. Glancing at the example in the documentation, it should be a single list instead:
listeners:
  - protocol: http
    ...
  - protocol: ssl
    ...
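Applied to the playbook in the question, the merged listeners block would look roughly like this (values copied from the original task):
listeners:
  - protocol: http
    load_balancer_port: 80
    instance_port: 80
  - protocol: ssl
    load_balancer_port: 443
    instance_protocol: tcp
    instance_port: 7286
    ssl_certificate_id: "arn:aws:iam::xxxxxx:server-certificate/LB_cert"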