Summarizing my environment:
Running Rundeck (3.3.11) on a Kubernetes cluster.
A dedicated MariaDB database connected via the JDBC connector.
Active Directory configured via JAAS using the RUNDECK_JAAS_LDAP_* variables; authentication works and I can log in with my AD user.
ACL policy template configured from a K8s Secret, as in this Zoo sample:
volumeMounts:
  - name: aclpolicy
    mountPath: /home/rundeck/etc/rundeck-adm.aclpolicy
    subPath: rundeck-adm.aclpolicy
volumes:
  - name: aclpolicy
    secret:
      secretName: rundeck-adm-policy
      items:
        - key: rundeck-admin-role.yaml
          path: rundeck-adm.aclpolicy
Variables exported to the Rundeck Pod:
RUNDECK_JAAS_MODULES_0=JettyCombinedLdapLoginModule
RUNDECK_JAAS_LDAP_USERBASEDN=OU=Users,OU=MYBRAND,DC=corp,DC=MYDOMAIN
RUNDECK_JAAS_LDAP_ROLEBASEDN=OU=RundeckRoles,OU=Users,OU=MYBRAND,DC=corp,DC=MYDOMAIN
RUNDECK_JAAS_LDAP_FLAG=sufficient
RUNDECK_JAAS_LDAP_BINDDN=myrundeckuser#mybrand.mydomain
RUNDECK_JAAS_LDAP_BINDPASSWORD=foo
In my MS Active Directory the structure is:
- mybrand.mydomain
  - MYBRAND
    - Users
      - RundeckRoles
        - rundeck-adm (group with my user associated)
After I log in, I get this screen:
EDIT1:
My rundeck-admin-role.yaml:
description: Admin project level access control. Applies to resources within a specific project.
context:
  project: '.*' # all projects
for:
  resource:
    - equals:
        kind: job
      allow: [create] # allow create jobs
    - equals:
        kind: node
      allow: [read,create,update,refresh] # allow refresh node sources
    - equals:
        kind: event
      allow: [read,create] # allow read/create events
  adhoc:
    - allow: [read,run,runAs,kill,killAs] # allow running/killing adhoc jobs
  job:
    - allow: [create,read,update,delete,run,runAs,kill,killAs] # allow create/read/write/delete/run/kill of all jobs
  node:
    - allow: [read,run] # allow read/run for nodes
by:
  group: rundeck-adm
---
description: Admin Application level access control, applies to creating/deleting projects, admin of user profiles, viewing projects and reading system information.
context:
  application: 'rundeck'
for:
  resource:
    - equals:
        kind: project
      allow: [create] # allow create of projects
    - equals:
        kind: system
      allow: [read,enable_executions,disable_executions,admin] # allow read of system info, enable/disable all executions
    - equals:
        kind: system_acl
      allow: [read,create,update,delete,admin] # allow modifying system ACL files
    - equals:
        kind: user
      allow: [admin] # allow modify user profiles
  project:
    - match:
        name: '.*'
      allow: [read,import,export,configure,delete,admin] # allow full access of all projects or use 'admin'
  project_acl:
    - match:
        name: '.*'
      allow: [read,create,update,delete,admin] # allow modifying project-specific ACL files
  storage:
    - allow: [read,create,update,delete] # allow access for /ssh-key/* storage content
by:
  group: rundeck-adm
Can someone help me find my mistake?
I found the problem!
I was missing the variables RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE and RUNDECK_JAAS_LDAP_ROLEOBJECTCLASS; if you don't declare them, Rundeck assumes different default values.
After applying these variables and re-deploying my Rundeck Pod, access with my AD account worked again.
To help the community, here is the list of variables I used in my deployment:
"JVM_MAX_RAM_PERCENTAGE"
"RUNDECK_DATABASE_URL"
"RUNDECK_DATABASE_DRIVER"
"RUNDECK_DATABASE_USERNAME"
"RUNDECK_DATABASE_PASSWORD"
"RUNDECK_LOGGING_AUDIT_ENABLED"
"RUNDECK_JAAS_MODULES_0"
"RUNDECK_JAAS_LDAP_FLAG"
"RUNDECK_JAAS_LDAP_PROVIDERURL"
"RUNDECK_JAAS_LDAP_BINDDN"
"RUNDECK_JAAS_LDAP_BINDPASSWORD"
"RUNDECK_JAAS_LDAP_USERBASEDN"
"RUNDECK_JAAS_LDAP_ROLEBASEDN"
"RUNDECK_GRAILS_URL"
"RUNDECK_SERVER_FORWARDED"
"RUNDECK_JAAS_LDAP_USERRDNATTRIBUTE"
"RUNDECK_JAAS_LDAP_USERIDATTRIBUTE"
"RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE"
The JAAS module I used was JettyCombinedLdapLoginModule.
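For reference, a minimal sketch of how those two variables could be set in the Pod spec; the values below are assumptions for a typical Active Directory schema (group objects listing their members in a "member" attribute), so adjust them to your directory:
env:
  - name: RUNDECK_JAAS_LDAP_ROLEMEMBERATTRIBUTE
    value: "member"   # assumed: AD groups list members in the "member" attribute
  - name: RUNDECK_JAAS_LDAP_ROLEOBJECTCLASS
    value: "group"    # assumed: AD role entries use objectClass "group"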
We are converting existing k8s services to use istio & knative. The services receive requests from external users as well as from within the cluster. We are trying to setup Istio AuthorizationPolicy to achieve the below requirements:
Certain paths (like docs/healthchecks) should not require any special header or anything and must be accessible from anywhere
Health & metric collection paths required to be accessed by knative must be accisible only by knative controllers
Any request coming from outside the cluster (through knative-serving/knative-ingress-gateway basically) must contain a key header matching a pre-shared key
Any request coming from any service within the cluster can access all the paths
Below is a sample of what I am trying. I am able to get the first 3 requirements working but not the last one...
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-svc
  namespace: my-ns
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: my-svc
  action: "ALLOW"
  rules:
    - to:
        - operation:
            methods:
              - "GET"
            paths:
              - "/docs"
              - "/openapi.json"
              - "/redoc"
              - "/rest/v1/healthz"
    - to:
        - operation:
            methods:
              - "GET"
            paths:
              - "/healthz*"
              - "/metrics*"
      when:
        - key: "request.headers[User-Agent]"
          values:
            - "Knative-Activator-Probe"
            - "Go-http-client/1.1"
    - to:
        - operation:
            paths:
              - "/rest/v1/myapp*"
      when:
        - key: "request.headers[my-key]"
          values:
            - "asjhfhjgdhjsfgjhdgsfjh"
    - from:
        - source:
            namespaces:
              - "*"
We have made no changes to the mTLS configuration provided by default by the Istio-Knative setup, so assume the mTLS mode is currently PERMISSIVE.
Details of the tech stack involved:
- AWS EKS - Version 1.21
- Knative Serving - Version 1.1 (with Istio 1.11.5)
I'm not an Istio expert, but you might be able to express the last policy based on either the ingress gateway (have one that listens only on a ClusterIP address) or on the source IP being within the cluster. For the latter, I'd want to test that Istio uses the actual source IP and does not substitute the Forwarded header's IP address (a different, also reasonable, configuration).
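If the source-IP route works, a rule along these lines might cover the fourth requirement; this is only a sketch, and the CIDR is a placeholder that would need to match your cluster's actual Pod/VPC range:
- from:
    - source:
        # Placeholder CIDR: replace with the cluster's real Pod/VPC range.
        # ipBlocks matches the packet's source address; remoteIpBlocks would
        # match the X-Forwarded-For header instead, which is the distinction
        # mentioned above.
        ipBlocks:
          - "10.0.0.0/16"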
I am trying to get more visibility into AWS. I'd really like to use the prebuilt dashboards that come with Filebeat, but I constantly run into issues with the visualizations for ELB and VPC flow logs. My configuration looks like this:
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "localhost:9243"
  protocol: "https"
  username: "kibana_user"
  password: "kibana_password"
setup.dashboards.enabled: true
setup.dashboards.directory: ${path.config}/kibana
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "https"
  username: "elastic_user"
  password: "password"
  indices:
    - index: "cloudtrail-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.cloudtrail"
    - index: "elb-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.elb"
    - index: "vpc-%{[agent.version]}-%{+yyyy.MM.dd}"
      when.contains:
        event.dataset: "aws.vpc"
processors:
  - add_fields:
      target: my_env
      fields:
        environment: development
In my dashboards directory I changed the filebeat-* index to vpc-* for Filebeat-aws-vpcflow-overview.json, cloudtrail-* for filebeat-aws-cloudtrail.json, and elb-* for Filebeat-aws-elb-overview.json. The CloudTrail dashboard works just fine; I only run into issues with the ELB and VPC flow visualizations. None of the ELB requests visualizations work, and the top IP addresses visualization for the VPC flow logs does not work either. Here are some screenshots:
Any help with this would be greatly appreciated.
For this particular situation, if you don't use the default filebeat-* index, there are issues getting the prebuilt dashboards to load. I dropped the custom indices from my configuration and was then able to get the dashboards to load properly.
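Roughly, the simplified output section looks like this (hosts and credentials are placeholders), with the indices block removed so all events go to the default filebeat-* index:
output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "https"
  username: "elastic_user"
  password: "password"
  # no "indices:" overrides, so Filebeat writes to the default filebeat-* index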
I'm new to Kubernetes and am playing with eksctl to create an EKS cluster in AWS. Here's my simple manifest file:
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"
managedNodeGroups:
  - name: ng-sandbox
    instanceType: r5a.xlarge
    privateNetworking: true
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyName: my-ssh-key
fargateProfiles:
  - name: fp-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: fp-sandbox
    selectors:
      # All workloads in the "sandbox" Kubernetes namespace matching the
      # following label selectors will be scheduled onto Fargate:
      - namespace: sandbox
        labels:
          env: sandbox
          checks: passed
I created two IAM roles: EKSClusterRole for cluster management and EKSWorkerRole for the worker nodes. Where do I use them in the file? I'm looking at the eksctl config file schema page and it's not clear to me where in the manifest to use them.
As you mentioned, it's in the managedNodeGroups docs
managedNodeGroups:
  - ...
    iam:
      instanceRoleARN: my-role-arn
      # or
      # instanceRoleName: my-role-name
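Applied to your config, a sketch might look like the following; the account ID and ARNs are placeholders, and using the top-level iam.serviceRoleARN for the cluster management role is my reading of the eksctl schema, so double-check it against the docs:
iam:
  # assumed field for the cluster management role (EKSClusterRole)
  serviceRoleARN: arn:aws:iam::111122223333:role/EKSClusterRole
managedNodeGroups:
  - name: ng-sandbox
    instanceType: r5a.xlarge
    iam:
      # worker node role (EKSWorkerRole)
      instanceRoleARN: arn:aws:iam::111122223333:role/EKSWorkerRole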
You should also read about:
- Creating a cluster with Fargate support using a config file
- AWS Fargate
I have the following role:
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
When I do a kubectl proxy --port 8080 and then try doing
http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/cdp/deployments/{deploymentname}
I get a 200 and everything works fine. However when I do:
http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/cdp/deployments/{deploymentname}/status
I get Forbidden with a 403 status back.
I am also able to do get, create, list, and watch on deployments with my admin role.
Any idea why /status would give Forbidden when I clearly have all the necessary permissions as admin for my namespace?
You mentioned the verbs of the role, but not the resources and apiGroups. Make sure the following are set:
- apiGroups:
    - apps
    - extensions
  resources:
    - deployments/status
The status subresource doesn't give you any more information than simply fetching the deployment.
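For illustration, a minimal Role granting read access to the status subresource could look like this; the Role name is hypothetical:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-status-reader   # hypothetical name
  namespace: cdp
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments/status"]
    verbs: ["get"]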
The admin role's permissions do not let you write deployment status. They let you create and delete deployment objects, controlling the "spec" portion of the object. Permission to modify the status is granted to the deployment controller.
I've been trying to set up a Rundeck server but have run into several issues: the configured authentication doesn't provide full access to projects, and when I try to modify the config files, authentication then fails as shown below.
HTTP ERROR: 500
Problem accessing /user/j_security_check. Reason:
java.io.IOException: Configuration Error:
No such file or directory
My jaas-loginmodule.conf looks like this:
com.dtolabs.rundeck.jetty.jaas.JettyCachingLdapLoginModule sufficient
    debug="true"
    contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
    providerUrl="ldaps://sb2sys3.derivatives.com"
    bindDn="uid=svldap,cn=users,cn=accounts,dc=derivatives,dc=com"
    bindPassword="T0wR0pe!"
    authenticationMethod="simple"
    forceBindingLoginUseRootContextForRoles="true"
    forceBindingLogin="true"
    userBaseDn="cn=users,cn=accounts,dc=derivatives,dc=com"
    userRdnAttribute="uid"
    userIdAttribute="uid"
    userPasswordAttribute="userPassword"
    userObjectClass="inetOrgPerson"
    roleBaseDn="cn=groups,cn=accounts,dc=derivatives,dc=com"
    roleNameAttribute="cn"
    roleMemberAttribute="member"
    roleObjectClass="groupOfNames"
    cacheDurationMillis="300000"
    supplementalRoles="user"
    reportStatistics="true";

org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
    debug="true"
    file="/etc/rundeck/realm.properties";
};
I've also edited the realm.properties file to have a user with the role admin, which is also changed in web.xml.
The current admin.aclpolicy looks like this:
description: Admin, all access.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '*' # allow read/create all kinds
  adhoc:
    - allow: '*' # allow read/running/killing adhoc jobs
  job:
    - allow: '*' # allow read/write/delete/run/kill of all jobs
  node:
    - allow: '*' # allow read/run for all nodes
by:
  group: admin
---
description: Admin, all access.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*' # allow create of projects
  project:
    - allow: '*' # allow view/admin of all projects
  project_acl:
    - allow: '*' # allow admin of all project-level ACL policies
  storage:
    - allow: '*' # allow read/create/update/delete for all /keys/* storage content
by:
  group: admin
The error you are receiving appears to be related to the JAAS_CONF variable.
I managed to reproduce the exact 500 error on an RPM installation on CentOS 7.
By commenting out the JAAS_CONF variable in /etc/rundeck/profile and, if you have set it, in /etc/sysconfig/rundeckd or /etc/default/rundeckd, the error shows the same empty java.io.IOException with "Configuration Error: No such file or directory", so a typo in those files may be affecting authentication.
I would advise doing a complete check of those files to verify that everything is in order.
Hope it helps.
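For reference, on an RPM install the relevant lines in /etc/rundeck/profile usually look roughly like this; the exact contents vary by version, so treat the defaults below as assumptions:
# Assumed defaults; adjust to your installation
JAAS_CONF=${JAAS_CONF:-/etc/rundeck/jaas-loginmodule.conf}
LOGIN_MODULE=${LOGIN_MODULE:-RDpropertyfilelogin}
# Both values end up on the JVM command line as
#   -Djava.security.auth.login.config=$JAAS_CONF
#   -Dloginmodule.name=$LOGIN_MODULE
# which is why a typo in JAAS_CONF can surface as the
# "Configuration Error: No such file or directory" IOException above.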