KEDA Operator can't parse connection string for Azure Storage Queue - azure-storage-queues

I'm attempting to set up KEDA with Azure Storage Queues for experimentation. I want to authenticate with the storage account's connection string, but the KEDA operator is unable to parse it. The ScaledObject is created and its status is "Ready", but it can't get the queue length or talk to the queues at all. I have created an Opaque secret and referenced it via the authenticationRef section as described in the documentation.
The error I see in the logs is:
ERROR azure_queue_scaler error {"error": "can't parse storage connection string. Missing key or name"}
I have carefully followed the documentation and looked at the KEDA source code but I'm still puzzled.
This is my secret definition:
apiVersion: v1
kind: Secret
metadata:
  name: azure-st-conn-string-secret
type: Opaque
data:
  connectionString: MY_BASE_64_ENCODED_CONNECTION_STRING
And my TriggerAuth and ScaledObjects:
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-queue-auth
spec:
  secretTargetRef:
  - parameter: connection
    name: azure-st-conn-string-secret
    key: connectionString
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-queue-scaledobject
spec:
  scaleTargetRef:
    name: my-deployment-target
  triggers:
  - type: azure-queue
    metadata:
      queueLength: '5'
      queueName: my-queue-name
    authenticationRef:
      name: azure-queue-auth
From going through the KEDA source code, this error is thrown when the connection string is not in the expected format. However, I verified that the secret, as created in the cluster, conforms exactly to what the source code expects, i.e.:
DefaultEndpointsProtocol=https;AccountName=THE_ACCOUNT_NAME;AccountKey=THE_ACCOUNT_KEY;EndpointSuffix=core.windows.net
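One sanity check I can think of (a sketch, not a confirmed fix) is to define the secret with stringData instead of data, so Kubernetes does the base64 encoding itself and a stray trailing newline from manual encoding can't sneak in:

apiVersion: v1
kind: Secret
metadata:
  name: azure-st-conn-string-secret
type: Opaque
stringData:
  # Plain-text value; Kubernetes base64-encodes it on creation.
  connectionString: DefaultEndpointsProtocol=https;AccountName=THE_ACCOUNT_NAME;AccountKey=THE_ACCOUNT_KEY;EndpointSuffix=core.windows.net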
What am I doing wrong?

Related

Secure mTLS communication within Istio-knative services + external requests

We are converting existing k8s services to use Istio & Knative. The services receive requests from external users as well as from within the cluster. We are trying to set up an Istio AuthorizationPolicy to achieve the requirements below:
Certain paths (like docs/healthchecks) should not require any special header or anything and must be accessible from anywhere
Health & metric collection paths required by Knative must be accessible only to the Knative controllers
Any request coming from outside the cluster (through knative-serving/knative-ingress-gateway basically) must contain a key header matching a pre-shared key
Any request coming from any service within the cluster can access all the paths
Below is a sample of what I am trying. I am able to get the first 3 requirements working but not the last one...
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-svc
  namespace: my-ns
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: my-svc
  action: "ALLOW"
  rules:
  - to:
    - operation:
        methods:
        - "GET"
        paths:
        - "/docs"
        - "/openapi.json"
        - "/redoc"
        - "/rest/v1/healthz"
  - to:
    - operation:
        methods:
        - "GET"
        paths:
        - "/healthz*"
        - "/metrics*"
    when:
    - key: "request.headers[User-Agent]"
      values:
      - "Knative-Activator-Probe"
      - "Go-http-client/1.1"
  - to:
    - operation:
        paths:
        - "/rest/v1/myapp*"
    when:
    - key: "request.headers[my-key]"
      values:
      - "asjhfhjgdhjsfgjhdgsfjh"
  - from:
    - source:
        namespaces:
        - "*"
We have made no changes to the mTLS configuration provided by default by the Istio-Knative setup, so assume that the mTLS mode is currently PERMISSIVE.
Details of the tech stack involved:
AWS EKS - Version 1.21
Knative Serving - Version 1.1 (with Istio 1.11.5)
I'm not an Istio expert, but you might be able to express the last policy based on either the ingress gateway (have one which is listening only on a ClusterIP address), or based on the SourceIP being within the cluster. For the latter, I'd want to test that Istio is using the actual SourceIP and not substituting in the Forwarded header's IP address (a different reasonable configuration).
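As a rough sketch (untested) of the source-IP approach, that last rule could use ipBlocks instead of namespaces; the 10.0.0.0/8 range below is an assumed in-cluster CIDR, so substitute your own pod/VPC range:

  - from:
    - source:
        # Assumed in-cluster CIDR; replace with your actual pod/VPC range.
        ipBlocks:
        - "10.0.0.0/8"

Whether this matches depends on the original client IP being preserved at the sidecar, which is why I'd verify the source-IP behaviour first.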

502 Bad Gateway Error After Instituting AuthorizationPolicy from Istio Documentation

I'm using Istio 1.5.4 and trying to apply the example referenced here:
https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#end-user-authentication
Everything works as expected until defining the AuthorizationPolicy - the moment I introduce it, I get a 502 Bad Gateway error regardless of whether I provide a valid JWT token or not.
On a secondary note, I'm able to get the AuthorizationPolicy to work properly if I apply the example at my own service's namespace level. Then RequestAuthentication + AuthorizationPolicy work as expected; however, I run into a different roadblock where internal services now also require a valid JWT token.
authentication/authorization internal service issue
I've discovered that the 502 is a result of my load balancer health check failing due to the AuthorizationPolicy applied. Adding a conditional User-Agent header check for my health check probe seems to do the trick, but then I'm back to the net effect where requests with no token still get through.
No token is getting through because that's how you configured your AuthorizationPolicy; that's how source: requestPrincipals: ["*"] works. Take a look at this example.
RequestAuthentication defines what request authentication methods are supported by a workload. It will reject a request if the request contains invalid authentication information, based on the configured authentication rules. A request that does not contain any authentication credentials will be accepted but will not have any authenticated identity. To restrict access to authenticated requests only, this should be accompanied by an authorization rule. Examples:
Require a JWT for all requests to workloads that have the label app: httpbin
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "issuer-foo"
    jwksUri: https://example.com/.well-known/jwks.json
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
Use requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"] instead, as mentioned here; then it will accept only requests with a token.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: frontend
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"
The second resource is an AuthorizationPolicy, which ensures that all requests have a JWT - and rejects requests that do not, returning a 403 error.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
Once we apply these resources, we can curl the Istio ingress gateway without a JWT, and see that the AuthorizationPolicy is rejecting our request because we did not supply a token:
$ curl ${INGRESS_IP}
RBAC: access denied
Finally, if we curl with a valid JWT, we can successfully reach the frontend via the IngressGateway:
$ curl --header "Authorization: Bearer ${VALID_JWT}" ${INGRESS_IP}
Hello World! /
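If the load balancer health check mentioned earlier still needs to pass without a token, one option (a sketch, assuming the probe path is /healthz) is to add a second ALLOW rule; rules in an ALLOW policy are OR'ed, so a request matching either rule is admitted:

  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
  - to:
    - operation:
        # Assumed health-check path; requests here need no JWT.
        paths: ["/healthz"]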

Debugging istio rate limiting handler

I'm trying to apply rate limiting on some of our internal services (inside the mesh).
I used the example from the docs and generated redis rate limiting configurations that include a (redis) handler, quota instance, quota spec, quota spec binding and rule to apply the handler.
This redis handler:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: redishandler
  namespace: istio-system
spec:
  compiledAdapter: redisquota
  params:
    redisServerUrl: <REDIS>:6379
    connectionPoolSize: 10
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 10
      validDuration: 100s
      rateLimitAlgorithm: FIXED_WINDOW
      overrides:
      - dimensions:
          destination: s1
        maxAmount: 1
      - dimensions:
          destination: s3
        maxAmount: 1
      - dimensions:
          destination: s2
        maxAmount: 1
The quota instance (I'm only interested in limiting by destination at the moment):
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | destination.service.host | "unknown"
A quota spec, charging 1 per request if I understand correctly:
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota
A quota binding spec that all participating services pre-fetch. I also tried with service: "*", which did nothing either.
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: s2
    namespace: default
  - name: s3
    namespace: default
  - name: s1
    namespace: default
  # - service: '*' # Uncomment this to bind *all* services to request-count
A rule to apply the handler. Currently it applies on all occasions (I also tried with match conditions, but that didn't change anything either):
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: redishandler
    instances:
    - requestcountquota
The VirtualService definitions are pretty similar for all participants:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: s1
spec:
  hosts:
  - s1
  http:
  - route:
    - destination:
        host: s1
The problem is nothing really happens and no rate limiting takes place. I tested with curl from pods inside the mesh. The redis instance is empty (no keys on db 0, which I assume is what the rate limiting would use) so I know it can't practically rate-limit anything.
The handler seems to be configured properly (how can I make sure?) because I had some errors in it which were reported in mixer (policy). There are still some errors, but none that I associate with this problem or the configuration. The only line in which the redis handler is mentioned is this:
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
But it's unclear whether that's a problem or not. I assume it's not.
These are the rest of the lines from the reload once I deploy:
2019-12-17T13:44:22.601644Z info Built new config.Snapshot: id='43'
2019-12-17T13:44:22.601866Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.601881Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.602718Z info adapters Waiting for kubernetes cache sync... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903844Z info adapters Cache sync successful. {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903878Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903882Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-12-17T13:44:22.904808Z info Setting up event handlers
2019-12-17T13:44:22.904939Z info Starting Secrets controller
2019-12-17T13:44:22.904991Z info Waiting for informer caches to sync
2019-12-17T13:44:22.957893Z info Cleaning up handler table, with config ID:42
2019-12-17T13:44:22.957924Z info adapters deleted remote controller {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.957999Z info adapters adapter closed all scheduled daemons and workers {"adapter": "prometheus.istio-system"}
2019-12-17T13:44:22.958041Z info adapters adapter closed all scheduled daemons and workers {"adapter": "redishandler.istio-system"}
2019-12-17T13:44:22.958065Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958050Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958096Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958182Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:23.958109Z info adapters adapter closed all scheduled daemons and workers {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:55:21.042131Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-17T14:14:00.265722Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
I'm using the demo profile with disablePolicyChecks: false to enable rate limiting. This is on Istio 1.4.0, deployed on EKS.
I also tried memquota (this is our staging environment) with low limits and nothing seemed to work. I never got a 429 no matter how far I went over the configured rate limit.
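For reference, a memquota handler equivalent to the redis one above would look roughly like this (a sketch; the handler name memhandler is illustrative, and the rule's handler field has to be switched to match):

apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: memhandler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 1       # deliberately low so a 429 should show up quickly
      validDuration: 100s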
I don't know how to debug this and see where the configuration is wrong causing it to do nothing.
Any help is appreciated.
I too spent hours trying to decipher the documentation and get a sample working.
The documentation recommends enabling policy checks:
https://istio.io/docs/tasks/policy-enforcement/rate-limiting/
However, when that did not work, I did an "istioctl profile dump", searched for policy, and tried several settings.
I used Helm install, passed the following, and was then able to get the described behaviour:
--set global.disablePolicyChecks=false \
--set values.pilot.policy.enabled=true \ ===> this made it work, but it's not in the docs.

Let's Encrypt DNS challenge using HTTP

I'm trying to set up a Let's Encrypt certificate on Google Cloud. I recently changed it from the http01 to the dns01 challenge type so that I could create Cloud DNS zones and have the ACME challenge TXT record added automatically.
Here's my certificate.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: san-tls
  namespace: default
spec:
  secretName: san-tls
  issuerRef:
    name: letsencrypt
  commonName: www.evolut.net
  altNames:
  - portal.evolut.net
  dnsNames:
  - www.evolut.net
  - portal.evolut.net
  acme:
    config:
    - dns01:
        provider: clouddns
      domains:
      - www.evolut.net
      - portal.evolut.net
However now I get the following error when I kubectl describe certificate:
Message: DNS names on TLS certificate not up to date: ["portal.evolut.net" "www.evolut.net"]
Reason: DoesNotMatch
Status: False
Type: Ready
More worryingly, when I kubectl describe order I see the following:
Status:
  Challenges:
    Authz URL:  https://acme-v02.api.letsencrypt.org/acme/authz/redacted
    Config:
      Http 01:
    Dns Name:   portal.evolut.net
    Issuer Ref:
      Kind:     Issuer
      Name:     letsencrypt
    Key:        redacted
    Token:      redacted
    Type:       http-01
    URL:        https://acme-v02.api.letsencrypt.org/acme/challenge/redacted
    Wildcard:   false
    Authz URL:  https://acme-v02.api.letsencrypt.org/acme/authz/redacted
    Config:
      Http 01:
Notice how the Type is always http-01, although in the certificate they are listed under dns01.
This means that the ACME TXT record is never created in Cloud DNS and of course the domains aren't validated.
This seems to be related to an issue with the use of multiple domains. I suggest using two different namespaces. You can check an example in the following link:
Failed to list *v1alpha1.Order: orders.certmanager.k8s.io is forbidden
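A sketch of that suggestion, assuming an Issuer named letsencrypt exists in each namespace (the namespace names below are illustrative), would be one Certificate per domain:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: www-tls
  namespace: frontend   # illustrative namespace
spec:
  secretName: www-tls
  issuerRef:
    name: letsencrypt
  commonName: www.evolut.net
  dnsNames:
  - www.evolut.net
  acme:
    config:
    - dns01:
        provider: clouddns
      domains:
      - www.evolut.net
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: portal-tls
  namespace: portal   # illustrative namespace
spec:
  secretName: portal-tls
  issuerRef:
    name: letsencrypt
  commonName: portal.evolut.net
  dnsNames:
  - portal.evolut.net
  acme:
    config:
    - dns01:
        provider: clouddns
      domains:
      - portal.evolut.net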

Istio Authorization with JWT

I am running Istio 1.0.2 and am unable to configure service authorization based on JWT claims against Azure AD.
I have successfully configured and validated Azure AD OIDC JWT end-user authentication and it works fine.
Now I'd like to configure RBAC authorization using the request.auth.claims["preferred_username"] attribute.
I've created a ServiceRole and ServiceRoleBinding like below:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: service-reader
  namespace: default
spec:
  rules:
  - services: ["myservice.default.svc.cluster.local"]
    methods: ["GET"]
    paths: ["*/products"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: service-reader-binding
  namespace: default
spec:
  subjects:
  - properties:
      source.principal: "*"
      request.auth.claims["preferred_username"]: "user@company.com"
  roleRef:
    kind: ServiceRole
    name: "service-reader"
However, I keep getting 403 Forbidden from the service proxy, even though the preferred_username claim in the Authorization header's token is correct.
If I comment out the request.auth.claims["preferred_username"]: "user@company.com" line, the request succeeds.
Can anyone point me in the right direction regarding configuring authorization based on OIDC and JWT?
Never mind. I found the problem.
I was missing the user: "*" check to allow all users.
So under subjects it should say:
subjects:
- user: "*"
  properties:
    source.principal: "*"
    request.auth.claims["preferred_username"]: "user@company.com"
That fixes it.
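For completeness, the full corrected ServiceRoleBinding would then look like this (the same resource as above, with the added user: "*" subject):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: service-reader-binding
  namespace: default
spec:
  subjects:
  - user: "*"
    properties:
      source.principal: "*"
      request.auth.claims["preferred_username"]: "user@company.com"
  roleRef:
    kind: ServiceRole
    name: "service-reader"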