We are converting existing k8s services to use Istio and Knative. The services receive requests from external users as well as from within the cluster. We are trying to set up an Istio AuthorizationPolicy that meets the requirements below:
Certain paths (such as docs and health checks) must not require any special header and must be accessible from anywhere.
Health and metric collection paths that Knative needs to probe must be accessible only to the Knative controllers.
Any request coming from outside the cluster (essentially through knative-serving/knative-ingress-gateway) must contain a key header matching a pre-shared key.
Any request coming from any service within the cluster must be able to access all paths.
Below is a sample of what I am trying. I am able to get the first three requirements working, but not the last one...
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-svc
  namespace: my-ns
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: my-svc
  action: "ALLOW"
  rules:
  - to:
    - operation:
        methods:
        - "GET"
        paths:
        - "/docs"
        - "/openapi.json"
        - "/redoc"
        - "/rest/v1/healthz"
  - to:
    - operation:
        methods:
        - "GET"
        paths:
        - "/healthz*"
        - "/metrics*"
    when:
    - key: "request.headers[User-Agent]"
      values:
      - "Knative-Activator-Probe"
      - "Go-http-client/1.1"
  - to:
    - operation:
        paths:
        - "/rest/v1/myapp*"
    when:
    - key: "request.headers[my-key]"
      values:
      - "asjhfhjgdhjsfgjhdgsfjh"
  - from:
    - source:
        namespaces:
        - "*"
We have made no changes to the mTLS configuration provided by default by the Istio-Knative setup, so assume that the mTLS mode is currently PERMISSIVE.
Details of the tech stack involved:
AWS EKS - Version 1.21
Knative Serving - Version 1.1 (with Istio 1.11.5)
I'm not an Istio expert, but you might be able to express the last policy based on either the ingress gateway (have one which is listening only on a ClusterIP address), or based on the SourceIP being within the cluster. For the latter, I'd want to test that Istio is using the actual SourceIP and not substituting in the Forwarded header's IP address (a different reasonable configuration).
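For the source-IP route, the shape of rule I have in mind would be roughly the fragment below, added to the existing rules list. The CIDR is a placeholder for the cluster's pod/VPC range (not something from your setup), and note that ipBlocks matches the direct connection's source address, while remoteIpBlocks would use the X-Forwarded-For header instead:
  - from:
    - source:
        ipBlocks:
        # Placeholder CIDR - replace with the cluster's actual pod/VPC range.
        - "10.0.0.0/16"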
Related
I was using Istio 1.8.6, and we have now migrated to 1.14.5.
After this upgrade the AuthorizationPolicy stopped working as it did previously.
In my case, I have two namespaces, and I want to restrict namespace-1 to only accept requests coming from namespace-2. Services in namespace-1 must not be able to call other services in that same namespace-1.
This is the AuthorizationPolicy:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-only-ns-1
  namespace: namespace-1
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["namespace-2"]
I have an API gateway running in namespace-2 that maps/routes all services in namespace-1.
So, if a service in namespace-1 needs to call another service in that namespace, it must call it through the API gateway running in namespace-2.
This is an example of an allowed flow:
service-1.namespace-1 -> api-gateway.namespace-2 -> service-2.namespace-1
This is an example of a flow that is NOT allowed:
service-1.namespace-1 -> service-2.namespace-1
After this Istio upgrade (1.14.5), the AuthorizationPolicy has stopped working. The new version blocks those requests with a 403 Forbidden (RBAC) error; the services are no longer allowed to receive requests from anywhere.
The old version (1.8.6) worked correctly in namespace-1, blocking requests coming from namespace-1 and allowing requests from namespace-2.
Any idea what is going on?
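For context, namespace-based source matching in an AuthorizationPolicy is derived from the peer certificate, so it only works when traffic between the namespaces actually uses mTLS. A minimal sketch of enforcing that (STRICT mode here is an assumption, not something taken from my current setup):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: namespace-1
spec:
  mtls:
    # Assumption: forcing mTLS so the peer identity (and namespace) is always available.
    mode: STRICT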
As a newbie in microservices architecture, I need to ask a few questions about implementing JWT authentication using Kong.
The architecture of my application looks like the picture below:
So far I have only used Kong as a proxy and load balancer. The authentication service was responsible for creating the token; the token was created during registration and login. During registration or login, the authentication service asked the user service, and the user service checked the user's data in the MongoDB database. Every endpoint in the other services had to receive a JWT in the header and had a function, along with a secret, that decoded the token. However, it seems to me that this is unnecessary duplication of code, and the whole process of creating and decoding the JWT could, or even should, be done in Kong with the JWT plugin.
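From what I understand so far, the JWT plugin only verifies tokens against credentials registered on a consumer, so something like the declarative snippet below would be needed; the names and secret here are made-up placeholders, and I am not sure this is the right approach, which is partly what I am asking:
consumers:
- username: auth_service_issuer           # hypothetical consumer name
  jwt_secrets:
  - key: auth_service_issuer_key          # the token's `kid` claim must match this key
    algorithm: HS256
    secret: "replace-with-a-real-secret"  # placeholder secret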
I tried to follow a couple of tutorials and YouTube guides, such as this one:
JWT Kong Gateway
Unfortunately, each of the tutorials shows how to create a JWT only for a single consumer, without Kong being connected to the database.
My kong.yml file:
_format_version: "3.0"
_transform: true
services:
- name: building_service
  url: http://building_service/building
  routes:
  - name: building_service_route
    paths:
    - /building
- name: user_service
  url: http://user_service/user
  routes:
  - name: user_service_route
    paths:
    - /user
- name: role_service
  url: http://role_service/role
  routes:
  - name: role_service_route
    paths:
    - /role
- name: task_service
  url: http://task_service/task
  routes:
  - name: task_service_route
    paths:
    - /task
- name: authorization_service
  url: http://authorization_service/authorization
  routes:
  - name: authorization_service_route
    paths:
    - /authorization
plugins:
- name: jwt
  route: building_service_route
  enabled: true
  config:
    key_claim_name: kid
    claims_to_verify:
    - exp
# consumers:
# - username: login_server_issuer
# jwt_secrets:
# - consumer: login_server_issuer
#   secret: "secret-hash-brown-bear-market-rate-limit"
- name: bot-detection
- name: rate-limiting
  config:
    minute: 60
    policy: local
Kong service in docker-compose.yml:
services:
  kong:
    build: ./App/kong
    volumes:
    - ./App/kong/kong.yml:/usr/local/kong/declarative/kong.yml
    container_name: kong
    environment:
      KONG_DATABASE: 'off'
      KONG_PROXY_ACCESS_LOG: '/dev/stdout'
      KONG_ADMIN_ACCESS_LOG: '/dev/stdout'
      KONG_PROXY_ERROR_LOG: '/dev/stderr'
      KONG_ADMIN_ERROR_LOG: '/dev/stderr'
      KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
      KONG_DECLARATIVE_CONFIG: "/usr/local/kong/declarative/kong.yml"
    command: "kong start"
    networks:
    - api-network
    ports:
    - "8000:8000"
    - "8443:8443"
    - "127.0.0.1:8001:8001"
    - "127.0.0.1:8444:8444"
List of my questions:
How should the authentication service connect to Kong and create a JWT with the chosen user's (as I understand it, the consumer's) data?
Should Kong somehow be connected to the database to get the required user data and create the secret?
How do I decode the JWT with Kong and pass it to the other services in a header?
Can anyone provide an example of how to achieve the desired result?
Have I misunderstood something about JWT or Kong, or is what I want to achieve impossible?
If you can consider using Keycloak for user management, then you can have a look at the jwt-keycloak plugin:
https://github.com/gbbirkisson/kong-plugin-jwt-keycloak
I'm using Istio 1.5.4 and trying to apply the example referenced here:
https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#end-user-authentication
Everything works as expected until I define the AuthorizationPolicy - the moment I introduce it, I get a 502 Bad Gateway error regardless of whether I provide a valid JWT token or not.
On a secondary note, I am able to get the AuthorizationPolicy to work properly if I update the example to apply at my own service's namespace level. Then RequestAuthentication + AuthorizationPolicy work as expected; however, I run into a different roadblock where internal services now also require a valid JWT token.
authentication/authorization internal service issue
I've discovered that the 502 is a result of my load balancer health check failing due to the applied AuthorizationPolicy. Adding a conditional User-Agent header match for my health check probe seems to do the trick, but then I'm back to the net effect where requests with no token still get through.
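Roughly what that health-check carve-out looks like as an extra ALLOW rule (the path and User-Agent value below are placeholders rather than my exact ones):
  - to:
    - operation:
        paths:
        - "/healthz"              # placeholder health-check path
    when:
    - key: request.headers[User-Agent]
      values:
      - "ELB-HealthChecker/*"     # placeholder probe User-Agent (prefix match)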
Requests with no token are getting through because that's how you configured your AuthorizationPolicy; that's how source: requestPrincipals: ["*"] works. Take a look at this example.
RequestAuthentication defines what request authentication methods are supported by a workload. It will reject a request if the request contains invalid authentication information, based on the configured authentication rules. A request that does not contain any authentication credentials will be accepted but will not have any authenticated identity. To restrict access to authenticated requests only, this should be accompanied by an authorization rule. Examples:
Require a JWT for all requests to workloads that have the label app: httpbin
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "issuer-foo"
    jwksUri: https://example.com/.well-known/jwks.json
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
Use requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"] instead, as mentioned here; then it will accept only requests with a token.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: frontend
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"
The second resource is an AuthorizationPolicy, which ensures that all requests have a JWT - and rejects requests that do not, returning a 403 error.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
Once we apply these resources, we can curl the Istio ingress gateway without a JWT, and see that the AuthorizationPolicy is rejecting our request because we did not supply a token:
$ curl ${INGRESS_IP}
RBAC: access denied
Finally, if we curl with a valid JWT, we can successfully reach the frontend via the IngressGateway:
$ curl --header "Authorization: Bearer ${VALID_JWT}" ${INGRESS_IP}
Hello World! /
I'm trying to apply rate limiting on some of our internal services (inside the mesh).
I used the example from the docs and generated redis rate limiting configurations that include a (redis) handler, a quota instance, a quota spec, a quota spec binding, and a rule to apply the handler.
This is the redis handler:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: redishandler
  namespace: istio-system
spec:
  compiledAdapter: redisquota
  params:
    redisServerUrl: <REDIS>:6379
    connectionPoolSize: 10
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 10
      validDuration: 100s
      rateLimitAlgorithm: FIXED_WINDOW
      overrides:
      - dimensions:
          destination: s1
        maxAmount: 1
      - dimensions:
          destination: s3
        maxAmount: 1
      - dimensions:
          destination: s2
        maxAmount: 1
The quota instance (I'm only interested in limiting by destination at the moment):
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | destination.service.host | "unknown"
A quota spec, charging 1 per request if I understand correctly:
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota
A QuotaSpecBinding that binds the quota spec to all participating services. I also tried service: "*", which also did nothing.
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: s2
    namespace: default
  - name: s3
    namespace: default
  - name: s1
    namespace: default
  # - service: '*'  # Uncomment this to bind *all* services to request-count
A rule to apply the handler. Currently it applies on all occasions (I also tried with match conditions, but that didn't change anything either):
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: redishandler
    instances:
    - requestcountquota
The VirtualService definitions are pretty similar for all participants:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: s1
spec:
  hosts:
  - s1
  http:
  - route:
    - destination:
        host: s1
The problem is that nothing really happens and no rate limiting takes place. I tested with curl from pods inside the mesh. The redis instance is empty (no keys on db 0, which I assume is what the rate limiting would use), so I know it can't practically rate-limit anything.
The handler seems to be configured properly (how can I make sure?) because I had some errors in it which were reported in mixer (policy). There are still some errors, but none that I associate with this problem or the configuration. The only line in which the redis handler is mentioned is this:
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
But it's unclear whether this is a problem or not. I assume it's not.
These are the rest of the lines from the reload once I deploy:
2019-12-17T13:44:22.601644Z info Built new config.Snapshot: id='43'
2019-12-17T13:44:22.601866Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.601881Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.602718Z info adapters Waiting for kubernetes cache sync... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903844Z info adapters Cache sync successful. {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903878Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903882Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.904808Z info Setting up event handlers
2019-12-17T13:44:22.904939Z info Starting Secrets controller
2019-12-17T13:44:22.904991Z info Waiting for informer caches to sync
2019-12-17T13:44:22.957893Z info Cleaning up handler table, with config ID:42
2019-12-17T13:44:22.957924Z info adapters deleted remote controller {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.957999Z info adapters adapter closed all scheduled daemons and workers {"adapter": "prometheus.istio-system"}
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
2019-12-17T13:44:22.958065Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958050Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958096Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958182Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:23.958109Z info adapters adapter closed all scheduled daemons and workers {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:55:21.042131Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-17T14:14:00.265722Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I'm using the demo profile with disablePolicyChecks: false to enable rate limiting. This is on Istio 1.4.0, deployed on EKS.
I also tried memquota (this is our staging environment) with low limits, roughly as sketched below, and nothing seems to work. I never got a 429 no matter how far I went over the configured rate limit.
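The memquota handler I tried was roughly this shape (a sketch; the amounts and durations shown are placeholders, not my exact staging values):
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: memhandler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 5          # placeholder limit
      validDuration: 10s    # placeholder window
      overrides:
      - dimensions:
          destination: s1
        maxAmount: 1
        validDuration: 10s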
I don't know how to debug this and see where the configuration is wrong, which is causing it to do nothing.
Any help is appreciated.
I too spent hours trying to decipher the documentation and get a sample working.
According to the documentation, they recommended that we enable policy checks:
https://istio.io/docs/tasks/policy-enforcement/rate-limiting/
However, when that did not work, I did an "istioctl profile dump", searched for policy, and tried several settings.
I used a Helm install, passed the following, and was then able to get the described behaviour:
--set global.disablePolicyChecks=false \
--set values.pilot.policy.enabled=true    # ===> this made it work, but it's not in the docs
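If you prefer keeping this in an overrides file rather than --set flags, the same two settings would look roughly like the snippet below; this is a sketch, and the exact key paths depend on whether you go through Helm directly or istioctl, so treat them as assumptions:
# Same keys as the --set flags above, written for a plain Helm values file
# (drop the leading "values." prefix when not going through istioctl).
global:
  disablePolicyChecks: false
pilot:
  policy:
    enabled: true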
We have an AWS Elastic Beanstalk application. There are various environments, some load balanced and some not.
At the moment, SSL for the load-balanced ones is configured manually in the console, whereas the single-instance ones are configured via the .ebextensions file (and only for the single-instance deployments).
Is there a way to configure the SSL for load balancers via the .ebextensions file as well, so we can keep it all in one place, and automate it?
I have not tried this yet, but while reading the documentation I discovered that it is possible to automate it.
If you have any luck following the instructions in the documentation, please let me know.
Update:
I actually tested it, and yes, it is possible. Here is a configuration example:
option_settings:
  - namespace: aws:elb:listener:443
    option_name: ListenerProtocol
    value: HTTPS
  - namespace: aws:elb:listener:443
    option_name: InstancePort
    value: 80
  - namespace: aws:elb:listener:443
    option_name: InstanceProtocol
    value: HTTP
  - namespace: aws:elb:listener:443
    option_name: SSLCertificateId
    value: arn:aws:iam::<your arn cert id here>
  - namespace: aws:elb:listener:80
    option_name: ListenerEnabled
    value: true
  - namespace: aws:elb:listener:443
    option_name: ListenerEnabled
    value: true
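For what it's worth, if the environment uses an Application Load Balancer rather than a classic ELB, the equivalent settings live under the aws:elbv2:listener namespace; a rough sketch (the certificate ARN is a placeholder):
option_settings:
  - namespace: aws:elbv2:listener:443
    option_name: Protocol
    value: HTTPS
  - namespace: aws:elbv2:listener:443
    option_name: SSLCertificateArns
    # Placeholder ARN - replace with your ACM certificate ARN.
    value: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>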