I am trying to deploy a RabbitMQ cluster on minikube based on the Bitnami chart and am facing the following challenge: when I try to pass the credentials using a secret, I get this error: couldn't find key rabbitmq-password in Secret default/rabbit
I created a secret called rabbit in my minikube cluster and tried to set the values file like this:
auth:
  # username: user
  # password: pass
  existingPasswordSecret: rabbit
and also like this:
auth:
  username: ${RABBITMQ_USERNAME}
  password: ${RABBITMQ_PASSWORD}
  existingPasswordSecret: rabbit
This is my secret-file:
apiVersion: v1
kind: Secret
metadata:
  name: rabbit
type: Opaque
data:
  RABBITMQ_USERNAME: dXNlcg== # bitnami variable
  RABBITMQ_PASSWORD: cGFzcw== # bitnami variable
This is the default secret of the chart (I am installing the chart using helm install rabbitmq -f rabbitmq/values.yml bitnami/rabbitmq):
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "rabbitmq.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
type: Opaque
data:
  {{- if not .Values.auth.existingPasswordSecret }}
  {{- if .Values.auth.password }}
  rabbitmq-password: {{ .Values.auth.password | b64enc | quote }}
  {{- else }}
  rabbitmq-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
  {{- end }}
The error message is telling you that you are missing the rabbitmq-password key in your secret:
couldn't find key rabbitmq-password in Secret default/rabbit
If we take a look at your secret, we can see you are providing two keys (RABBITMQ_USERNAME and RABBITMQ_PASSWORD), but not the rabbitmq-password key it expects:
apiVersion: v1
kind: Secret
metadata:
  name: rabbit
type: Opaque
data:
  RABBITMQ_USERNAME: dXNlcg== # bitnami variable
  RABBITMQ_PASSWORD: cGFzcw== # bitnami variable
Knowing this, you have to provide your password under the rabbitmq-password key instead of RABBITMQ_PASSWORD. The chart does not support passing the username as a secret, though. Your secret should look like this:
apiVersion: v1
kind: Secret
metadata:
  name: rabbit
type: Opaque
data:
  rabbitmq-password: cGFzcw== # base64 for "pass"
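For example, you could create the secret imperatively and keep the username in plain values (a sketch using the same user/pass values and secret name from your example):

kubectl create secret generic rabbit \
  --from-literal=rabbitmq-password=pass

auth:
  username: user
  existingPasswordSecret: rabbit

With that in place, the chart reads the password from the rabbit secret's rabbitmq-password key and templates the username directly from values.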
I am using Kong ingress controller on EKS.
High level flow:
NLB → Kong ingress controller and proxy(running in the same pod) → k8s service → backend pods
I am trying to achieve stickiness using the hash_on cookie configuration on the upstream.
I am using the session and hmac-auth plugins for generating the session/cookie.
1st request from the client: The first time the client sends a message to the NLB, the NLB sends the traffic to the Kong ingress controller and from there it goes to one of the backend pods. Since this is the first request, Kong generates a cookie and sends it back in the response to the client.
2nd request from the client: The second time the client sends a request, it includes the cookie it got from the response to the 1st request. When this request reaches Kong, Kong forwards it to a different pod than the one it used for the first request.
On the 3rd, 4th…nth request, Kong forwards the request to the same pod it used for the 2nd request.
How can we achieve stickiness for every request?
My expectation was that the first time Kong receives a request from a client, it generates a cookie containing some detail specific to the pod it is sending traffic to, and whenever the same client sends a request with that cookie, Kong should use the cookie to forward the request to the same pod as the first time. But this is not happening: I get stickiness from the 2nd request to the nth request, but not between the 1st and 2nd.
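For reference, this is roughly how the behaviour can be reproduced (hypothetical NLB hostname and endpoint; the cookie jar written by the first call is replayed on the later ones):

# 1st request: no cookie sent; Kong generates one, saved to cookies.txt
curl -c cookies.txt http://<nlb-host>/api/v1/serverservice/ping

# 2nd..nth request: replay the cookie; stickiness only kicks in from here
curl -b cookies.txt http://<nlb-host>/api/v1/serverservice/ping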
Ingress resource used for defining path:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    konghq.com/strip-path: "true"
  name: kong-ingress-bk-srvs
  namespace: default
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - backend:
          service:
            name: httpserver-service-cip
            port:
              number: 8084
        path: /api/v1/serverservice
        pathType: Prefix
      - backend:
          service:
            name: httpserver-service-cip-health
            port:
              number: 8084
        path: /api/v1/healthservice
        pathType: Prefix
upstream config:
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: stickiness-upstream
upstream:
  hash_on: cookie
  hash_on_cookie: my-test-cookie
  hash_on_cookie_path: /
session plugin:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: session-plugin
config:
  cookie_path: /
  cookie_name: my-test-cookie
  storage: cookie
  cookie_secure: false
  cookie_httponly: false
  cookie_samesite: None
plugin: session
hmac plugin:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: hmac-plugin
config:
  validate_request_body: true
  enforce_headers:
  - date
  - request-line
  - digest
  algorithms:
  - hmac-sha512
plugin: hmac-auth
consumer:
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: kong-consumer
  annotations:
    kubernetes.io/ingress.class: kong
username: consumer-user-3
custom_id: consumer-id-3
credentials:
- kong-cred
Pod service config (ingress backend service):
apiVersion: v1
kind: Service
metadata:
  annotations:
    konghq.com/override: stickiness-upstream
    konghq.com/plugins: session-plugin,hmac-plugin
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"configuration.konghq.com":"stickiness-upstream"},"labels":{"app":"httpserver"},"name":"httpserver-service-cip","namespace":"default"},"spec":{"ports":[{"name":"comm-port","port":8085,"targetPort":8085},{"name":"dur-port","port":8084,"targetPort":8084}],"selector":{"app":"httpserver"},"sessionAffinity":"ClientIP","sessionAffinityConfig":{"clientIP":{"timeoutSeconds":10000}}}}
  creationTimestamp: "2023-02-04T16:44:00Z"
  labels:
    app: httpserver
  name: httpserver-service-cip
  namespace: default
  resourceVersion: "6729057"
  uid: 481b7d8c-1f07-4293-809c-3b4b7dca41e0
spec:
  clusterIP: 10.101.99.87
  clusterIPs:
  - 10.101.99.87
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: comm-port
    port: 8085
    protocol: TCP
    targetPort: 8085
  - name: dur-port
    port: 8084
    protocol: TCP
    targetPort: 8084
  selector:
    app: httpserver
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
  type: ClusterIP
status:
  loadBalancer: {}
I've deployed ArgoCD using the following Terraform, which uses the argoproj Helm chart.
resource "helm_release" "argo_cd" {
chart = "argo-cd"
repository = "https://argoproj.github.io/argo-helm"
name = "argocd"
namespace = var.namespace
}
I knew when I created this that I needed to do two things. I needed to:
Add API access to the default admin user
Add new users
It appears that the solution to both #1 and #2 is to edit the configmap deployed with the helm chart. I've seen the question here about this very thing, and I've tried its suggestions without luck.
I did try adding a kubernetes_config_map like the following to add a new local user (I didn't see how to add api_key access to the default admin account yet):
resource "kubernetes_config_map" "local_users" {
metadata {
#Also tried generate_name here
name = "argocd-cm"
namespace = "argocd"
labels = {
"app.kubernetes.io/name" = "argocd-cm"
"app.kubernetes.io/part-of" = "argocd"
}
}
data = {
"accounts.new-user" = "apiKey, login"
"accounts.new-user.enabled" = "true"
}
}
When I deployed this, it said the configmap named argocd-cm already exists, which makes sense. So, following the question posted above, I tried generate_name instead, which went through, but I don't see the new user. And I can't list accounts using argocd account list, because my default admin user doesn't have API access.
So, I need to know how to edit the existing configmap in terraform in order to add new users (assuming that is in fact the best method), and grant permission of the default admin user in order to use the API as that user. Any help is hugely appreciated.
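For what it's worth, the closest thing I've found so far is the kubernetes provider's kubernetes_config_map_v1_data resource, which is documented to take ownership of individual keys on an existing ConfigMap; an untested sketch (assuming hashicorp/kubernetes >= 2.10):

resource "kubernetes_config_map_v1_data" "argocd_local_users" {
  metadata {
    name      = "argocd-cm"
    namespace = "argocd"
  }

  data = {
    "accounts.new-user"         = "apiKey, login"
    "accounts.new-user.enabled" = "true"
  }

  # Take ownership of these keys even though Helm created the ConfigMap
  force = true
}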
Also from the question above, I did try using the set values, but I'm fairly certain I'm formatting them incorrectly, or this isn't how you do it (shown below):
resource "helm_release" "argo_cd" {
chart = "argo-cd"
repository = "https://argoproj.github.io/argo-helm"
name = "argocd"
namespace = var.namespace
set {
name = "accounts.new-user"
value = "apiKey, login"
}
}
Which just gives me a parsing error: failed parsing key "accounts.new-user" with value apiKey, login, key " login" has no value
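Presumably the comma needs escaping, since Helm's --set parser treats commas as list separators; something like the following might at least get past the parsing error (whether this is the right values path is a separate question — for this chart version the argocd-cm entries appear to live under server.config, which is an assumption on my part):

set {
  # Literal dots in the key are escaped with \. and the comma in the value with \,
  name  = "server.config.accounts\\.new-user"
  value = "apiKey\\, login"
}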
The current configmap named argocd-cm looks like this:
kubectl get configmap argocd-cm -n argocd -o yaml

apiVersion: v1
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
  exec.enabled: "false"
  server.rbac.log.enforce.enable: "false"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: argocd
    meta.helm.sh/release-namespace: argocd
  creationTimestamp: "2022-07-02T14:06:49Z"
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
    helm.sh/chart: argo-cd-4.9.7
  name: argocd-cm
  namespace: argocd
  resourceVersion: "4382002"
  uid: da102b9c-45d9-41j8-9c9a-f8cbca79b003
I've created an IRSA role in Terraform so that the associated service account can be used by a K8s job to access an S3 bucket, but I keep getting an AccessDenied error within the job.
I first enabled IRSA in our EKS cluster with enable_irsa = true in our eks module.
I then created a simple aws_iam_policy as:
resource "aws_iam_policy" "eks_s3_access_policy" {
name = "eks_s3_access_policy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:*",
]
Effect = "Allow"
Resource = "arn:aws:s3:::*"
},
]
})
}
and an iam-assumable-role-with-oidc:
module "iam_assumable_role_with_oidc_for_s3_access" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "~> 3.0"
create_role = true
role_name = "eks-s3-access"
role_description = "Role to access s3 bucket"
tags = { Role = "eks_s3_access_policy" }
provider_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
role_policy_arns = [aws_iam_policy.eks_s3_access_policy.arn]
number_of_role_policy_arns = 1
oidc_fully_qualified_subjects = ["system:serviceaccount:default:my-user"]
}
I created a K8s service account using Helm like:
Name:                my-user
Namespace:           default
Labels:              app.kubernetes.io/managed-by=Helm
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::111111:role/eks-s3-access
                     meta.helm.sh/release-name: XXXX
                     meta.helm.sh/release-namespace: default
Image pull secrets:  <none>
Mountable secrets:   my-user-token-kwwpq
Tokens:              my-user-token-kwwpq
Events:              <none>
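The manifest behind that describe output would presumably look like this (account ID redacted as above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-user
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111111:role/eks-s3-access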
Finally, jobs are created using the K8s API from a job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
  namespace: default
spec:
  template:
    spec:
      serviceAccountName: my-user
      containers:
      - name: {{ .Chart.Name }}
        env:
        - name: AWS_ROLE_ARN
          value: arn:aws:iam::746181457053:role/eks-s3-access
        - name: AWS_WEB_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
        volumeMounts:
        - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
          name: aws-iam-token
          readOnly: true
      volumes:
      - name: aws-iam-token
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              audience: sts.amazonaws.com
              expirationSeconds: 86400
              path: token
When the job attempts to get the specified credentials, however, the specified token is not there:
2021-08-03 18:02:41 Refreshing temporary credentials failed during mandatory refresh period.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 291, in _protected_refresh
    metadata = await self._refresh_using()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 345, in fetch_credentials
    return await self._get_cached_credentials()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 355, in _get_cached_credentials
    response = await self._get_credentials()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 410, in _get_credentials
    kwargs = self._assume_role_kwargs()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 420, in _assume_role_kwargs
    identity_token = self._web_identity_token_loader()
  File "/usr/local/lib/python3.7/site-packages/botocore/utils.py", line 2365, in __call__
    with self._open(self._web_identity_token_path) as token_file:
FileNotFoundError: [Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token'
From what is described in https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/ a webhook typically creates these credentials when the pod is created. However, since we're creating the new k8s job on demand within the k8s cluster, I suspect that the webhook is not creating any such credentials.
How can I request the correct credentials to be created from within a K8s cluster? Is there a way to instantiate the webhook from within the cluster?
There are a couple of things that could cause this to fail.
Check all settings for the IRSA role. For the trust relationship, check that the name of the namespace and the name of the service account are correct. The role can only be assumed if these settings match.
While the pod is running, try to access it with a shell. Check the content of the AWS_* environment variables. Check that AWS_ROLE_ARN points to the correct role. Check that the file AWS_WEB_IDENTITY_TOKEN_FILE points to is in place and readable; just cat the file to see.
If you are running your pod as non-root (which is recommended for security reasons), make sure the user running the pod has access to the file. If not, adjust the securityContext for the pod; the fsGroup setting can help here. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context
Make sure the SDK your pod is using supports IRSA. Older SDKs may not support it. Look into the IRSA documentation for supported SDK versions. https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html
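For example, the in-pod checks above boil down to something like this (hypothetical pod name):

# Inspect the IRSA-related environment variables
kubectl exec -it <pod-name> -- env | grep '^AWS_'

# Verify the token file exists and is readable by the container user
kubectl exec -it <pod-name> -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token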
I am looking to add SASL plaintext authentication in Banzai Kafka. I have added the following configs in my readOnlyConfig section.
readOnlyConfig: |
  auto.create.topics.enable=false
  cruise.control.metrics.topic.auto.create=true
  cruise.control.metrics.topic.num.partitions=1
  cruise.control.metrics.topic.replication.factor=2
  delete.topic.enable=true
  offsets.topic.replication.factor=2
  group.initial.rebalance.delay.ms=3000
  sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
  sasl.enabled.mechanisms=SCRAM-SHA-256
  listener.name.external.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="testuser";
I have the following in the listeners config:
listenersConfig:
  externalListeners:
  - type: "sasl_plaintext"
    name: "external"
    externalStartingPort: 51985
    containerPort: 29094
    accessMethod: LoadBalancer
  internalListeners:
  - type: "plaintext"
    name: "internal"
    containerPort: 29092
    usedForInnerBrokerCommunication: true
  - type: "plaintext"
    name: "controller"
    containerPort: 29093
    usedForInnerBrokerCommunication: false
    usedForControllerCommunication: true
When I try to connect a producer or consumer, Kafka returns an Authentication/Authorization failed error.
I am setting the following client properties:
session.timeout.ms=60000
partition.assignment.strategy=org.apache.kafka.clients.consumer.StickyAssignor
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="testuser";
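For reference, these properties live in a client.properties file and the console clients are invoked roughly like this (hypothetical load-balancer address and topic name):

kafka-console-producer.sh --bootstrap-server <lb-address>:51985 \
  --topic test --producer.config client.properties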
Can anyone suggest what is wrong?
I would like to route traffic based on HTTP headers with Traefik. If no rule matches, I need to route to another service or return a custom status code (426). Is it possible to configure a default case for the rules?
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: headers
spec:
  entryPoints:
  - web
  - websecure
  routes:
  - match: Headers(`X-ROUTE`,`Apache`)
    kind: Rule
    services:
    - name: apache
      port: 80
  - match: Headers(`X-ROUTE`,`nginx`)
    kind: Rule
    services:
    - name: nginx
      port: 80
  - else ??
You can add a catch-all rule that matches anything, with the lowest priority, which is 1:
- match: HostRegexp(`{catchall:.*}`)
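Fleshed out inside your routes list, with a hypothetical fallback service, it would look like this (priority: 1 makes Traefik evaluate this rule after all the others):

  - match: HostRegexp(`{catchall:.*}`)
    kind: Rule
    priority: 1
    services:
    - name: fallback
      port: 80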