Kubernetes cluster role admin not able to get deployment status - authentication

I have the following role:
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
When I do a kubectl proxy --port 8080 and then try doing
http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/cdp/deployments/{deploymentname}
I get a 200 and everything works fine. However when I do:
http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/cdp/deployments/{deploymentname}/status
I get forbidden and a 403 status back.
I am also able to do get, create, list, and watch on deployments with my admin role.
Any idea why /status would give forbidden when I clearly have all the necessary permissions as admin for my namespace?

You mentioned the verbs of the role but not the resources and apiGroups. Make sure the following are set:
- apiGroups:
  - apps
  - extensions
  resources:
  - deployments/status
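For example, a minimal Role sketch (the name is hypothetical) that grants read access to the status subresource in the cdp namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-status-reader   # hypothetical name
  namespace: cdp
rules:
- apiGroups:
  - apps
  - extensions
  resources:
  - deployments/status
  verbs:
  - get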

The status subresource doesn't give you any more information than simply fetching the deployment.
The admin role permissions do not let you write deployment status. They let you create and delete the deployment objects, controlling the "spec" portion of the object. Status modification permissions are granted to the deployment controller.
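To see exactly what the bound role grants, kubectl has a built-in check (a sketch; adjust the namespace to yours):
kubectl auth can-i get deployments --subresource=status -n cdp
kubectl auth can-i update deployments --subresource=status -n cdp   # status writes are reserved for the deployment controller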


How can I configure the AdmissionConfiguration > PodSecurity > PodSecurityConfiguration in an EKS cluster?

If I understand Apply Pod Security Standards at the Cluster Level correctly, in order to have a PSS (Pod Security Standard) as the default for the whole cluster I need to create an AdmissionConfiguration in a file that the API server consumes during cluster creation.
I don't see any way to configure or provide the AdmissionConfiguration at CreateCluster, and I'm also not sure how to provide this AdmissionConfiguration in a managed EKS node.
From the tutorials that use KinD or minikube it seems that the AdmissionConfiguration must be in a file referenced in the cluster-config.yaml, but if I'm not mistaken the EKS API server is managed and does not allow you to change or even see this file.
The GitHub issue aws/container-roadmap "Allow Access to AdmissionConfiguration" seems to suggest that there is currently no way of providing an AdmissionConfiguration at creation, but on the other hand aws-eks-best-practices says "These exemptions are applied statically in the PSA admission controller configuration as part of the API server configuration".
So, is there a way to provide a PodSecurityConfiguration for the whole cluster in EKS, or am I forced to just use per-namespace labels? (An example of the label approach is below.)
See also Enforce Pod Security Standards by Configuring the Built-in Admission Controller and EKS Best practices PSS and PSA.
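For reference, the per-namespace label approach mentioned above is just a label on each namespace, e.g. (the namespace name is a placeholder):
kubectl label namespace my-namespace pod-security.kubernetes.io/enforce=restricted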
I don't think there is currently any way in EKS to provide configuration for the built-in PSA controller (Pod Security Admission controller).
But if you want to implement a cluster-wide default for PSS (Pod Security Standards), you can do that by installing the official pod-security-webhook as a Dynamic Admission Controller in EKS:
git clone https://github.com/kubernetes/pod-security-admission
cd pod-security-admission/webhook
make certs          # generate the TLS certificates the webhook needs
kubectl apply -k .  # deploy the webhook with kustomize
The default podsecurityconfiguration.yaml in pod-security-admission/webhook/manifests/020-configmap.yaml allows EVERYTHING, so you should edit it and write something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-security-webhook
  namespace: pod-security-webhook
data:
  podsecurityconfiguration.yaml: |
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: ["policy-test2"]
Then:
kubectl apply -k .
kubectl -n pod-security-webhook rollout restart deployment/pod-security-webhook # otherwise the pods won't reread the configuration changes
After those changes you can verify that the default forbids privileged pods with:
kubectl --context aihub-eks-terraform create ns policy-test1
kubectl --context aihub-eks-terraform -n policy-test1 run --image=ecerulm/ubuntu-tools:latest --rm -ti rubelagu-$RANDOM --privileged
Error from server (Forbidden): admission webhook "pod-security-webhook.kubernetes.io" denied the request: pods "rubelagu-32081" is forbidden: violates PodSecurity "restricted:latest": privileged (container "rubelagu-32081" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "rubelagu-32081" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "rubelagu-32081" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "rubelagu-32081" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "rubelagu-32081" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Note that you get the error forbidding privileged pods even though the namespace policy-test1 has no pod-security.kubernetes.io/enforce label, so you know this rule comes from the pod-security-webhook we just installed and configured.
Now if you want to create a pod, you will be forced to create it in a way that complies with the restricted PSS, by specifying runAsNonRoot, seccompProfile.type, and capabilities. For example:
apiVersion: v1
kind: Pod
metadata:
  name: test-1
spec:
  restartPolicy: Never
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: test
    image: ecerulm/ubuntu-tools:latest
    imagePullPolicy: Always
    command: ["/bin/bash", "-c", "--", "sleep 900"]
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
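If the manifest is saved as test-1.yaml (an arbitrary file name), applying it to the restricted namespace should now be admitted:
kubectl --context aihub-eks-terraform -n policy-test1 apply -f test-1.yaml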

RKE2 Authorized endpoint configuration help required

I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article "How to add an authorised cluster endpoint to a RKE2 cluster created by Rancher" describes how to add it to an existing cluster, but although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created /var/lib/rancher/rke2/kube-api-authn-webhook.yaml file with contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
  "authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
]
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
After restarting rke2-server I found the network configuration tab in Rancher and was able to enable the authorized endpoint. Here is where my success ends.
I tried to create a service account and use its secret for token authorization, but it failed when connecting directly to the API endpoint on the master; a sketch of the attempt is below.
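Roughly (the service account name and master address are placeholders; kubectl create token requires Kubernetes v1.24+, on older versions read the SA secret instead):
kubectl -n default create serviceaccount test-sa
kubectl -n default create token test-sa     # prints a bearer token
curl -k -H "Authorization: Bearer <token>" https://<master-node>:6443/version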
kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way, so I tried to use a Rancher user token, but that fails as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no SA and secret entries corresponding to the users created in Rancher; however, I found related SA and secret entries in cattle-impersonation-system.
I tried creating a new user, but that too only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets are located in the cattle-system namespace. (What I checked is sketched below.)
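Roughly what I ran (a sketch):
kubectl -n cattle-system get serviceaccounts,secrets            # nothing matching the Rancher users
kubectl -n cattle-impersonation-system get serviceaccounts      # the impersonation SAs show up here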
Now the questions:
Can I authenticate with the downstream RKE2 cluster using normal SA tokens (not ones created through the Rancher server)? If so, how?
What did I do wrong when adding the webhook authentication configuration? How do I make it work?
I noticed that since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.

*.aclpolicy file not working - Auth using Active Directory

Summarizing my environment:
Running Rundeck (3.3.11) in a Kubernetes cluster
Dedicated MariaDB database connected via a JDBC Connector.
Configured Active Directory via JAAS using the RUNDECK_JAAS_LDAP_* variables; auth is working and I can log on using my AD user.
Configured the ACL Policy template using a K8s Secret as in this Zoo sample:
volumeMounts:
- name: aclpolicy
  mountPath: /home/rundeck/etc/rundeck-adm.aclpolicy
  subPath: rundeck-adm.aclpolicy
volumes:
- name: aclpolicy
  secret:
    secretName: rundeck-adm-policy
    items:
    - key: rundeck-admin-role.yaml
      path: rundeck-adm.aclpolicy
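For reference, the Secret referenced by secretName above can be created like this (a sketch; the local file path is a placeholder):
kubectl create secret generic rundeck-adm-policy \
  --from-file=rundeck-admin-role.yaml=./rundeck-admin-role.yaml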
Variables exported to Rundeck Pod:
RUNDECK_JAAS_MODULES_0=JettyCombinedLdapLoginModule
RUNDECK_JAAS_LDAP_USERBASEDN=OU=Users,OU=MYBRAND,DC=corp,DC=MYDOMAIN
RUNDECK_JAAS_LDAP_ROLEBASEDN=OU=RundeckRoles,OU=Users,OU=MYBRAND,DC=corp,DC=MYDOMAIN
RUNDECK_JAAS_LDAP_FLAG=sufficient
RUNDECK_JAAS_LDAP_BINDDN=myrundeckuser@mybrand.mydomain
RUNDECK_JAAS_LDAP_BINDPASSWORD=foo
In my MS Active Directory the structure is:
- mybrand.mydomain
  - MYBRAND
    - Users
      - RundeckRoles
        - rundeck-adm (group with my user associated)
After I log in, it returns this screen (screenshot omitted):
EDIT1:
My rundeck-admin-role.yaml:
description: Admin project level access control. Applies to resources within a specific project.
context:
  project: '.*' # all projects
for:
  resource:
    - equals:
        kind: job
      allow: [create] # allow create jobs
    - equals:
        kind: node
      allow: [read,create,update,refresh] # allow refresh node sources
    - equals:
        kind: event
      allow: [read,create] # allow read/create events
  adhoc:
    - allow: [read,run,runAs,kill,killAs] # allow running/killing adhoc jobs
  job:
    - allow: [create,read,update,delete,run,runAs,kill,killAs] # allow create/read/write/delete/run/kill of all jobs
  node:
    - allow: [read,run] # allow read/run for nodes
by:
  group: rundeck-adm
---
description: Admin Application level access control, applies to creating/deleting projects, admin of user profiles, viewing projects and reading system information.
context:
  application: 'rundeck'
for:
  resource:
    - equals:
        kind: project
      allow: [create] # allow create of projects
    - equals:
        kind: system
      allow: [read,enable_executions,disable_executions,admin] # allow read of system info, enable/disable all executions
    - equals:
        kind: system_acl
      allow: [read,create,update,delete,admin] # allow modifying system ACL files
    - equals:
        kind: user
      allow: [admin] # allow modify user profiles
  project:
    - match:
        name: '.*'
      allow: [read,import,export,configure,delete,admin] # allow full access of all projects or use 'admin'
  project_acl:
    - match:
        name: '.*'
      allow: [read,create,update,delete,admin] # allow modifying project-specific ACL files
  storage:
    - allow: [read,create,update,delete] # allow access for /ssh-key/* storage content
by:
  group: rundeck-adm
Can someone help me find my mistake?
Guys, I found the trouble!
I was missing the variables RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE and RUNDECK_JAAS_LDAP_ROLEOBJECTCLASS; if you don't declare them, Rundeck assumes other default values.
After applying these vars and re-deploying my Rundeck Pod, access with my AD account works again.
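For example, values that typically match Active Directory (a sketch; verify them against your own schema):
RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE=member   # AD group membership attribute
RUNDECK_JAAS_LDAP_ROLEOBJECTCLASS=group        # AD group objectClass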
To help the community I'm making available the list of vars that I used in my deployment:
"JVM_MAX_RAM_PERCENTAGE"
"RUNDECK_DATABASE_URL"
"RUNDECK_DATABASE_DRIVER"
"RUNDECK_DATABASE_USERNAME"
"RUNDECK_DATABASE_PASSWORD"
"RUNDECK_LOGGING_AUDIT_ENABLED"
"RUNDECK_JAAS_MODULES_0"
"RUNDECK_JAAS_LDAP_FLAG"
"RUNDECK_JAAS_LDAP_PROVIDERURL"
"RUNDECK_JAAS_LDAP_BINDDN"
"RUNDECK_JAAS_LDAP_BINDPASSWORD"
"RUNDECK_JAAS_LDAP_USERBASEDN"
"RUNDECK_JAAS_LDAP_ROLEBASEDN"
"RUNDECK_GRAILS_URL"
"RUNDECK_SERVER_FORWARDED"
"RUNDECK_JAAS_LDAP_USERRDNATTRIBUTE"
"RUNDECK_JAAS_LDAP_USERIDATTRIBUTE"
"RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE"
The JAAS module I used was JettyCombinedLdapLoginModule.

Rundeck: http error 500: when logging in as admin

I've been trying to set up a Rundeck server but have run into several issues: the configured authentication doesn't provide full access to projects, and when I've tried to modify the config files, it then fails to authenticate as shown below.
HTTP ERROR: 500
Problem accessing /user/j_security_check. Reason:
java.io.IOException: Configuration Error:
No such file or directory
My jaas-loginmodule.conf looks like this:
com.dtolabs.rundeck.jetty.jaas.JettyCachingLdapLoginModule sufficient
    debug="true"
    contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
    providerUrl="ldaps://sb2sys3.derivatives.com"
    bindDn="uid=svldap,cn=users,cn=accounts,dc=derivatives,dc=com"
    bindPassword="T0wR0pe!"
    authenticationMethod="simple"
    forceBindingLoginUseRootContextForRoles="true"
    forceBindingLogin="true"
    userBaseDn="cn=users,cn=accounts,dc=derivatives,dc=com"
    userRdnAttribute="uid"
    userIdAttribute="uid"
    userPasswordAttribute="userPassword"
    userObjectClass="inetOrgPerson"
    roleBaseDn="cn=groups,cn=accounts,dc=derivatives,dc=com"
    roleNameAttribute="cn"
    roleMemberAttribute="member"
    roleObjectClass="groupOfNames"
    cacheDurationMillis="300000"
    supplementalRoles="user"
    reportStatistics="true";

org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
    debug="true"
    file="/etc/rundeck/realm.properties";
};
I've also edited the realm.properties file to have a user with the admin role, which is also changed in the web.xml; a sketch of such an entry is below.
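A sketch of a realm.properties entry with the admin role (the password is a placeholder):
# format: username: password [,rolename ...]
admin: secretpassword,user,admin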
The current admin.aclpolicy looks like this:
description: Admin, all access.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '*' # allow read/create all kinds
  adhoc:
    - allow: '*' # allow read/running/killing adhoc jobs
  job:
    - allow: '*' # allow read/write/delete/run/kill of all jobs
  node:
    - allow: '*' # allow read/run for all nodes
by:
  group: admin
---
description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
  project_acl:
    - allow: '*' # allow admin of all project-level ACL policies
  storage:
    - allow: '*' # allow read/create/update/delete for all /keys/* storage content
by:
  group: admin
The error you are receiving appears to be related to the JAAS_CONF variable.
I managed to reproduce the exact 500 error on an RPM installation with CentOS 7.
By commenting out the JAAS_CONF variable in /etc/rundeck/profile and, if you have set it, /etc/sysconfig/rundeckd or /etc/default/rundeckd, the error shows an empty java.io.IOException with "Configuration Error: No such file or directory", so it is quite possible that a mistype in those files is affecting the authentication.
I would advise you to check those files thoroughly to verify that everything is in order; the relevant lines are sketched below.
Hope it helps.
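A sketch of the variables in question as they might appear in /etc/rundeck/profile (the path is the stock RPM default; the module name is an example and must match the entry name in your JAAS file):
JAAS_CONF="/etc/rundeck/jaas-loginmodule.conf"   # must point at an existing file
LOGIN_MODULE="ldapmulti"                         # must match the JAAS entry name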

Devise ldap authenticatable returns 401 error

I am making an application in Rails in which the user needs to log in first to see the content. I used devise and ldap_devise_authenticatable to let users log in through an existing LDAP account.
However, when I try to log in to my application with my account, login fails (401 Unauthorized) even though I already have an account on the LDAP server.
I am following this tutorial.
And following is a screenshot of my LDAP server page (omitted here):
I know the problem is in my LDAP configuration file. How can I configure it properly so that my application sends the correct string to the LDAP server, as in the screenshot above?
My ldap.yml is as follows:
authorizations: &AUTHORIZATIONS
  group_base: ou=groups,dc=test,dc=com
  required_groups:
    - cn=admins,ou=groups,dc=test,dc=com
    - cn=users,ou=groups,dc=test,dc=com
    - ["moreMembers", "cn=users,ou=groups,dc=test,dc=com"]
  require_attribute:
    objectClass: inetOrgPerson
    authorizationRole: postsAdmin

## Environment
development:
  host: 172.16.100.6
  port: 389
  attribute: cn
  base: ou=People,dc=iitj,dc=ac,dc=in
  #admin_user: cn=admin,dc=test,dc=com
  #admin_password: admin_password
  ssl: false
Have you tried setting attribute: sAMAccountName?
Also, you need to set admin_user and admin_password to match the credentials of the account you have in LDAP.
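For example, a sketch of the development block with those changes (values are illustrative; match them to your directory):
development:
  host: 172.16.100.6
  port: 389
  attribute: sAMAccountName              # AD logon attribute instead of cn
  base: ou=People,dc=iitj,dc=ac,dc=in
  admin_user: cn=admin,dc=test,dc=com    # bind account that can search the tree
  admin_password: admin_password
  ssl: false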