I've deployed ArgoCD using the following Terraform, which uses the argoproj Helm chart.
resource "helm_release" "argo_cd" {
chart = "argo-cd"
repository = "https://argoproj.github.io/argo-helm"
name = "argocd"
namespace = var.namespace
}
I knew when I created this that I needed to do two things. I needed to:
Add API access to the default admin user
Add new users
It appears that the solution to both #1 and #2 is to edit the configmap deployed with the Helm chart. I've seen the question here about this very thing, and I've tried its suggestions as described below.
I did try adding a kubernetes_config_map like the following to add a new local user (I didn't yet see how to add apiKey access to the default admin account):
resource "kubernetes_config_map" "local_users" {
metadata {
#Also tried generate_name here
name = "argocd-cm"
namespace = "argocd"
labels = {
"app.kubernetes.io/name" = "argocd-cm"
"app.kubernetes.io/part-of" = "argocd"
}
}
data = {
"accounts.new-user" = "apiKey, login"
"accounts.new-user.enabled" = "true"
}
}
When I deployed this, it said the configmap named argocd-cm already exists, which makes sense. So, as suggested in the question linked above, I tried switching to generate_name, which went through, but I don't see the new user. And I can't check via the API with argocd account list, because my default admin user doesn't have API access.
So, I need to know how to edit the existing configmap in Terraform in order to add new users (assuming that is in fact the best method), and how to grant the default admin user permission to use the API. Any help is hugely appreciated.
Also following the question above, I did try using set values, but I'm fairly certain I'm formatting them incorrectly, or this isn't how you do it (shown below):
resource "helm_release" "argo_cd" {
chart = "argo-cd"
repository = "https://argoproj.github.io/argo-helm"
name = "argocd"
namespace = var.namespace
set {
name = "accounts.new-user"
value = "apiKey, login"
}
}
Which just gives me a parsing error: failed parsing key "accounts.new-user" with value apiKey, login, key " login" has no value
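For what it's worth, here is a sketch of how this might be wired up instead, under two assumptions: Helm's set parser treats an unescaped comma as a list separator (which would explain the key " login" has no value error), and in the chart version deployed here (argo-cd 4.x) user-supplied argocd-cm entries appear to go under server.config rather than at the top level (newer 5.x charts moved this to configs.cm). Passing a values block avoids the comma-escaping problem entirely, and accounts.admin: apiKey is, as far as I can tell, how the built-in admin gets the apiKey capability:

resource "helm_release" "argo_cd" {
  chart      = "argo-cd"
  repository = "https://argoproj.github.io/argo-helm"
  name       = "argocd"
  namespace  = var.namespace

  # In the argo-cd 4.x chart, entries under server.config should end up in the
  # argocd-cm ConfigMap, so a separate kubernetes_config_map resource
  # shouldn't be needed.
  values = [yamlencode({
    server = {
      config = {
        "accounts.new-user"         = "apiKey, login"
        "accounts.new-user.enabled" = "true"
        "accounts.admin"            = "apiKey" # grant admin the apiKey capability
      }
    }
  })]
}

With set blocks the same thing requires escaping both the dots in the key and the comma in the value, e.g. name = "server.config.accounts\\.new-user" with value = "apiKey\\, login".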
The current configmap named argocd-cm looks like this:
kubectl get configmap argocd-cm -n argocd -o yaml
apiVersion: v1
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
  exec.enabled: "false"
  server.rbac.log.enforce.enable: "false"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: argocd
    meta.helm.sh/release-namespace: argocd
  creationTimestamp: "2022-07-02T14:06:49Z"
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
    helm.sh/chart: argo-cd-4.9.7
  name: argocd-cm
  namespace: argocd
  resourceVersion: "4382002"
  uid: da102b9c-45d9-41j8-9c9a-f8cbca79b003
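(For reference, once the new entries take effect, the data: section above should gain lines like the following, shown here as kubectl would render them; the accounts.admin entry is the hypothetical admin apiKey grant from the sketch earlier:)

data:
  accounts.admin: apiKey
  accounts.new-user: apiKey, login
  accounts.new-user.enabled: "true"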
I've created an IRSA role in Terraform so that the associated service account can be used by a K8s job to access an S3 bucket, but I keep getting an AccessDenied error within the job.
I first enabled IRSA in our EKS cluster with enable_irsa = true in our eks module.
I then created a simple aws_iam_policy as:
resource "aws_iam_policy" "eks_s3_access_policy" {
name = "eks_s3_access_policy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:*",
]
Effect = "Allow"
Resource = "arn:aws:s3:::*"
},
]
})
}
and an iam-assumable-role-with-oidc module:
module "iam_assumable_role_with_oidc_for_s3_access" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "~> 3.0"
create_role = true
role_name = "eks-s3-access"
role_description = "Role to access s3 bucket"
tags = { Role = "eks_s3_access_policy" }
provider_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
role_policy_arns = [aws_iam_policy.eks_s3_access_policy.arn]
number_of_role_policy_arns = 1
oidc_fully_qualified_subjects = ["system:serviceaccount:default:my-user"]
}
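For reference, the trust policy this module should generate for the role looks roughly like the following (the account ID and OIDC provider URL are placeholders). It is the StringEquals condition on the sub claim that ties the role to the default/my-user service account, which is why the namespace and service account names must match exactly:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Principal": {
        "Federated": "arn:aws:iam::111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
      },
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:default:my-user"
        }
      }
    }
  ]
}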
I created a K8s service account using Helm like:
Name:                my-user
Namespace:           default
Labels:              app.kubernetes.io/managed-by=Helm
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::111111:role/eks-s3-access
                     meta.helm.sh/release-name: XXXX
                     meta.helm.sh/release-namespace: default
Image pull secrets:  <none>
Mountable secrets:   my-user-token-kwwpq
Tokens:              my-user-token-kwwpq
Events:              <none>
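The manifest behind that service account would look something like this sketch (names and account ID taken from the describe output above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-user
  namespace: default
  annotations:
    # The EKS pod identity webhook keys on this annotation when a pod
    # using this service account is admitted.
    eks.amazonaws.com/role-arn: arn:aws:iam::111111:role/eks-s3-access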
Finally, jobs are created using the K8s API from a job template:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
  namespace: default
spec:
  template:
    spec:
      serviceAccountName: my-user
      containers:
        - name: {{ .Chart.Name }}
          env:
            - name: AWS_ROLE_ARN
              value: arn:aws:iam::746181457053:role/eks-s3-access
            - name: AWS_WEB_IDENTITY_TOKEN_FILE
              value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
          volumeMounts:
            - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
              name: aws-iam-token
              readOnly: true
      volumes:
        - name: aws-iam-token
          projected:
            defaultMode: 420
            sources:
              - serviceAccountToken:
                  audience: sts.amazonaws.com
                  expirationSeconds: 86400
                  path: token
When the job attempts to get the specified credentials, however, the specified token is not there:
2021-08-03 18:02:41 Refreshing temporary credentials failed during mandatory refresh period.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 291, in _protected_refresh
    metadata = await self._refresh_using()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 345, in fetch_credentials
    return await self._get_cached_credentials()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 355, in _get_cached_credentials
    response = await self._get_credentials()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 410, in _get_credentials
    kwargs = self._assume_role_kwargs()
  File "/usr/local/lib/python3.7/site-packages/aiobotocore/credentials.py", line 420, in _assume_role_kwargs
    identity_token = self._web_identity_token_loader()
  File "/usr/local/lib/python3.7/site-packages/botocore/utils.py", line 2365, in __call__
    with self._open(self._web_identity_token_path) as token_file:
FileNotFoundError: [Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token'
From what is described in https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/, a webhook normally injects these credentials when the pod is created. However, since we're creating the new K8s job on demand from within the K8s cluster, I suspect that the webhook is not creating any such credentials.
How can I request the correct credentials to be created from within a K8s cluster? Is there a way to instantiate the webhook from within the cluster?
There are a couple of things that could cause this to fail.
Check all settings for the IRSA role. For the trust relationship, check that the namespace name and the service account name are correct. The role can only be assumed if these settings match.
While the pod is running, try to access it with a shell. Check the content of the AWS_* environment variables. Check that AWS_ROLE_ARN points to the correct role, and that the file AWS_WEB_IDENTITY_TOKEN_FILE points to exists and is readable; just cat the file to see. (A sketch of these checks follows this list.)
If you are running your pod as non-root (which is recommended for security reasons), make sure the user running the pod has access to the token file. If not, adjust the pod's securityContext; the fsGroup setting may help here. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context
Make sure the SDK your pod is using supports IRSA. Older SDKs may not support it; see the IRSA documentation for the minimum supported SDK versions. https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html
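A minimal sketch of those in-pod checks (this assumes a shell in the image and, for the last command, the AWS CLI):

kubectl exec -it <pod-name> -- sh

# Inside the pod:
env | grep AWS_                     # AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE should both be set
cat "$AWS_WEB_IDENTITY_TOKEN_FILE"  # should print a JWT, not "No such file or directory"
aws sts get-caller-identity         # should report the assumed eks-s3-access role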
I am trying to deploy a rabbitmq-cluster on minikube based on the Bitnami chart and am facing the following challenge: when I try to pass the credentials using a secret, I get this error: couldn't find key rabbitmq-password in Secret default/rabbit
I created a secret called rabbit in my minikube cluster and tried to set the values file like this:
auth:
  # username: user
  # password: pass
  existingPasswordSecret: rabbit
and also like this:
auth:
  username: ${RABBITMQ_USERNAME}
  password: ${RABBITMQ_PASSWORD}
  existingPasswordSecret: rabbit
This is my secret-file:
apiVersion: v1
kind: Secret
metadata:
  name: rabbit
type: Opaque
data:
  RABBITMQ_USERNAME: dXNlcg== # (bitnami variable)
  RABBITMQ_PASSWORD: cGFzcw== # (bitnami variable)
This is the default secret of the chart (I am installing the chart using helm install rabbitmq -f rabbitmq/values.yml bitnami/rabbitmq):
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "rabbitmq.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
type: Opaque
data:
  {{- if not .Values.auth.existingPasswordSecret }}
  {{- if .Values.auth.password }}
  rabbitmq-password: {{ .Values.auth.password | b64enc | quote }}
  {{- else }}
  rabbitmq-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
  {{- end }}
The error message is telling you that you are missing the rabbitmq-password key in your secret:
couldn't find key rabbitmq-password in Secret default/rabbit
If we take a look at your secret, we can see you are providing two keys (RABBITMQ_USERNAME and RABBITMQ_PASSWORD), but not the rabbitmq-password key it expects:
apiVersion: v1
kind: Secret
metadata:
  name: rabbit
type: Opaque
data:
  RABBITMQ_USERNAME: dXNlcg== # (bitnami variable)
  RABBITMQ_PASSWORD: cGFzcw== # (bitnami variable)
Knowing this, you have to provide your password under the rabbitmq-password key instead of RABBITMQ_PASSWORD. Note that the chart does not support passing the username via a secret, though. Your secret should look like this:
apiVersion: v1
kind: Secret
metadata:
  name: rabbit
type: Opaque
data:
  rabbitmq-password: cGFzcw== # (bitnami variable)
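Equivalently, assuming the decoded password is pass, the same secret can be created without a manifest (kubectl base64-encodes literal values for you):

kubectl create secret generic rabbit --from-literal=rabbitmq-password=pass

The username can then stay in the plain values file under auth.username.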
I was trying to set up an AuthorizationPolicy by following the Istio 1.5 security documentation:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
    - from:
        - source:
            requestPrincipals: ["testing#secure.istio.io/testing#secure.istio.io"]
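For reference, the same tutorial pairs this policy with a RequestAuthentication that validates the JWT; the requestPrincipals condition can only match after such a resource is in place. Roughly (the jwksUri is taken from the Istio 1.5 samples):

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
    - issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"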
But when I apply this policy to my service, I get 'RBAC: access denied'.
Please find the Envoy proxy logs below:
[Envoy (Epoch 0)] [2020-03-27 14:40:31.225][24][debug][rbac] [external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:68] checking request: remoteAddress: 10.1.0.65:57780, localAddress: 10.1.0.64:9080, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account, subjectPeerCertificate: ,
headers: ':authority', 'localhost'
':path', '/productpage'
':method', 'GET'
'content-type', 'application/json'
'authorization', 'Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IkRIRmJwb0lVcXJZOHQyenBBMnFYZkNtcjVWTzVaRXI0UnpIVV8tZW52dlEiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjM1MzczOTExMDQsImdyb3VwcyI6WyJncm91cDEiLCJncm91cDIiXSwiaWF0IjoxNTM3MzkxMTA0LCJpc3MiOiJ0ZXN0aW5nQHNlY3VyZS5pc3Rpby5pbyIsInNjb3BlIjpbInNjb3BlMSIsInNjb3BlMiJdLCJzdWIiOiJ0ZXN0aW5nQHNlY3VyZS5pc3Rpby5pbyJ9.EdJnEZSH6X8hcyEii7c8H5lnhgjB5dwo07M5oheC8Xz8mOllyg--AHCFWHybM48reunF--oGaG6IXVngCEpVF0_P5DwsUoBgpPmK1JOaKN6_pe9sh0ZwTtdgK_RP01PuI7kUdbOTlkuUi2AO-qUyOm7Art2POzo36DLQlUXv8Ad7NBOqfQaKjE9ndaPWT7aexUsBHxmgiGbz1SyLH879f7uHYPbPKlpHU6P9S-DaKnGLaEchnoKnov7ajhrEhGXAQRukhDPKUHO9L30oPIr5IJllEQfHYtt6IZvlNUGeLUcif3wpry1R5tBXRicx2sXMQ7LyuDremDbcNy_iE76Upg'
'user-agent', 'PostmanRuntime/7.22.0'
'accept', '*/*'
'cache-control', 'no-cache'
'postman-token', 'f06a794e-1bd7-4c03-ad78-1638a309b71a'
'accept-encoding', 'gzip, deflate, br'
'content-length', '4868'
'x-forwarded-for', '192.168.65.3'
'x-forwarded-proto', 'http'
'x-request-id', '012804b1-67ca-942d-9636-40478e932e75'
'x-b3-traceid', 'f8f9e4f94847aec5ce7dec347a5bfa5d'
'x-b3-spanid', 'ce7dec347a5bfa5d'
'x-b3-sampled', '1'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/default/sa/bookinfo-productpage;Hash=5e82efecebbaf212aae6359cec7cbc0b6aa281ddeaf3e7adb280c503a5c04a5f;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account'
, dynamicMetadata: filter_metadata {
  key: "istio_authn"
  value {
    fields {
      key: "request.auth.principal"
      value {
        string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
      }
    }
    fields {
      key: "source.namespace"
      value {
        string_value: "istio-system"
      }
    }
    fields {
      key: "source.principal"
      value {
        string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
      }
    }
    fields {
      key: "source.user"
      value {
        string_value: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
      }
    }
  }
}
[Envoy (Epoch 0)] [2020-03-27 14:40:31.225][24][debug][rbac] [external/envoy/source/extensions/filters/http/rbac/rbac_filter.cc:111] enforced denied
[2020-03-27T14:40:31.224Z] "GET /productpage HTTP/1.1" 403 - "-" "-" 0 19 1 - "192.168.65.3" "PostmanRuntime/7.22.0" "012804b1-67ca-942d-9636-40478e932e75" "localhost" "-" - - 10.1.0.64:9080 192.168.65.3:0 outbound_.9080_._.productpage.default.svc.cluster.local -
Please help me to solve this issue. Thanks in advance.
Try updating Istio to v1.5.1.
According to the Istio documentation, a bug was fixed there that affected the security.istio.io/v1beta1 authentication policy you are using:
Fixed OpenID discovery does not work with beta request authentication policy (Issue 21954)
To perform the Istio upgrade, please review the Istio upgrade documentation page.
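A minimal sketch of that upgrade with istioctl (this assumes the in-place upgrade flow from the Istio 1.5 docs; download the 1.5.1 release first and use its bundled binary):

# From the downloaded istio-1.5.1 directory:
bin/istioctl upgrade

# Verify the control-plane and data-plane versions afterwards:
bin/istioctl version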
Hope it helps.
I am using Spring Cloud Config Server to serve configuration for my client apps. To facilitate secrets configuration I am using HashiCorp Vault as a back end; for the remainder of the configuration I am using a Git repo. So I have configured the config server in composite mode. See my config server bootstrap.yml below:
server:
  port: 8888
spring:
  profiles:
    active: local, git, vault
  application:
    name: my-domain-configuration-server
  cloud:
    config:
      server:
        git:
          uri: https://mygit/my-domain-configuration
          order: 1
        vault:
          order: 2
          host: vault.mydomain.com
          port: 8200
          scheme: https
          backend: mymount/generic
This is all working as expected. However, the token I am using is secured with a Vault auth policy. See below:
path "mymount/generic/myapp-app,local" {
  policy = "read"
}

path "mymount/generic/myapp-app,local/*" {
  policy = "read"
}

path "mymount/generic/myapp-app" {
  policy = "read"
}

path "mymount/generic/myapp-app/*" {
  policy = "read"
}

path "mymount/generic/application,local" {
  policy = "read"
}

path "mymount/generic/application,local/*" {
  policy = "read"
}

path "mymount/generic/application" {
  policy = "read"
}

path "mymount/generic/application/*" {
  policy = "read"
}
My issue is that I am not storing secrets in all these scopes. I need to specify all these paths just so the token is authorized to read one secret from mymount/generic/myapp-app,local. If I do not authorize all the other paths, the VaultEnvironmentRepository.read() method returns a 403 HTTP status code (Forbidden) and throws a VaultException, which results in a complete failure to retrieve any configuration for the app, including the Git-based configuration. This is very limiting, as client apps may have multiple Spring profiles that have nothing to do with retrieving configuration items; the config server will attempt to retrieve configuration for all the active profiles the client provides.
Is there a way to enable fault tolerance or lenience on the config server, so that VaultEnvironmentRepository does not abort and returns any configuration that it is actually authorized to return?
Do you absolutely need the local profile? Would you not be able to get by with just the 'vault' and 'git' profiles in Config Server and use the 'default' profile in each Spring Boot application?
If you use the above suggestion then the only two paths you'd need in your rules (.hcl) file are:
path "mymount/generic/application" {
capabilities = ["read", "list"]
}
and
path "mymount/generic/myapp-app" {
capabilities = ["read", "list"]
}
This assumes that you're writing configuration to
vault write mymount/generic/myapp-app
and not
vault write mymount/generic/myapp-app,local
or similar.
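If it helps, with a recent Vault CLI the trimmed-down policy could be applied and tested roughly like this (the policy name and file name are illustrative):

# Save the two path blocks above as config-server.hcl, then:
vault policy write config-server config-server.hcl
vault token create -policy=config-server

# Sanity check with the new token:
vault read mymount/generic/myapp-app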
I'm looking into the OpenShift 3 API to refresh an authenticated token.
I've authenticated and have an existing token, but I need to refresh it every 30 minutes.
From the documentation - https://docs.openshift.com/enterprise/3.0/rest_api/openshift_v1.html#create-a-oauthaccesstoken
It requires a v1.OAuthAccessToken object, but everything in it is optional; calling it with an empty object returns this error:
{ kind: 'Status',
  apiVersion: 'v1',
  metadata: {},
  status: 'Failure',
  message: 'User "admin01" cannot create oauthaccesstokens at the cluster scope',
  reason: 'Forbidden',
  details: { kind: 'oauthaccesstokens' },
  code: 403 }
Can anyone help? Thanks in advance.
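In case it's useful, the 403 above is an RBAC denial: regular users cannot create OAuthAccessToken objects directly. A hedged sketch of the usual alternative in OpenShift 3 is to request a fresh token from the OAuth endpoint instead of the REST API (the master URL is a placeholder; openshift-challenging-client is the built-in OAuth client the CLI uses):

# The new access_token is returned in the Location header fragment:
curl -u admin01 -kI \
  "https://master.example.com:8443/oauth/authorize?client_id=openshift-challenging-client&response_type=token"

# Or, when logged in with oc, print the current session token:
oc whoami -t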