When a client sends a request to the Kubernetes apiserver, authentication plugins attempt to associate a number of attributes to the request. These attributes can be used by authorisation plugins to determine whether the client's request can proceed.
One such attribute is the UID of the client; however, Kubernetes does not consider the UID attribute during authorisation. If this is the case, how is the UID attribute used?
The UID field is intentionally not used for authorisation decisions; it exists to allow logging for audit purposes.
For many organizations this might not be important, but for example Google allows employees to change their usernames (but of course not the numeric UID). Logging the UID would allow lookups of actions regardless of the current username.
(Now, some might point out that changing the username will likely involve losing the current privileges; this is an accepted limitation/inconvenience.)
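To make the audit use case concrete, here is a minimal sketch of grouping apiserver audit events by UID rather than username. The field names follow the `audit.k8s.io/v1` Event schema (`user.username`, `user.uid`), but the events themselves are made up for illustration:

```python
import json

# Two hypothetical audit events in the audit.k8s.io/v1 JSON format
# (field names follow the Kubernetes audit Event schema; the values
# are invented for this example).
audit_log = """
{"kind": "Event", "apiVersion": "audit.k8s.io/v1", "verb": "get", "user": {"username": "jdoe", "uid": "1234"}}
{"kind": "Event", "apiVersion": "audit.k8s.io/v1", "verb": "delete", "user": {"username": "jane-doe", "uid": "1234"}}
""".strip()

# Group actions by the stable UID, so a username change does not
# break the audit trail.
actions_by_uid = {}
for line in audit_log.splitlines():
    event = json.loads(line)
    uid = event["user"]["uid"]
    actions_by_uid.setdefault(uid, []).append(
        (event["user"]["username"], event["verb"])
    )

# Both actions map to the same UID despite the username change.
print(actions_by_uid["1234"])
```

Keying the lookup on UID is what makes "lookups of actions regardless of the current username" work.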
What are the main differences between Hashicorp-Vault AppRole Auth Method and Userpass Auth Method?
In the documentation I see that approle is intended to be used mostly by machines or apps and userpass is for users.
The obvious differences are a slightly different API and some different naming:
role_id and secret_id for approle
username and password for userpass
What are the other key differences in terms of security, performance etc.?
Another main difference is the workflow, because of the target audience. Let me unpack this.
userpass is made for human users. approle is made for services/machines/scripts.
A main difference, caused by this difference of workflow, lies in how you rotate your secrets.
With userpass, each username has a single password. When you change that single password, it’s changed immediately and the previous one is revoked.
Approle works more like traditional API keys (or AWS access keys, if you're familiar with them; this is why AWS lets you have two different keys): for each role_id, you can create multiple secret_ids. This becomes very useful when you need to distribute these secrets to multiple instances, and also to rotate them: the creation of a new secret_id and the revocation of the previous one are decoupled.
Another useful difference to remember is that, although both userpass and approle let you set num_uses or IP restrictions on the generated token, approle also lets you set validation constraints on the secret_id itself: the secret_id can be set to be valid only from specific IPs, only a specific number of times, or for a given TTL (this TTL is on the secret_id, not on the token, which can have another TTL).
These secret_id restrictions give you more control over how you secure your authentication data distribution (wrapping is indeed really nice, but these other settings give you a lot of options to make it work in your workflow, depending on the service you're trying to secure).
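The rotation and constraint model described above can be illustrated with a small toy model. This is not the Vault API (the real thing is driven via `vault write auth/approle/...` or an HTTP client); it just demonstrates why multiple secret_ids per role_id make rotation decoupled:

```python
import time
import secrets

class AppRole:
    """Toy model of Vault's AppRole auth method (illustration only,
    not the real Vault API)."""

    def __init__(self, role_id):
        self.role_id = role_id
        # Several secret_ids can be live at once, each with its own
        # constraints: remaining uses and expiry time.
        self.secret_ids = {}

    def generate_secret_id(self, num_uses=0, ttl=0):
        sid = secrets.token_hex(16)
        expiry = time.time() + ttl if ttl else None
        self.secret_ids[sid] = {"uses_left": num_uses or None, "expiry": expiry}
        return sid

    def revoke_secret_id(self, sid):
        self.secret_ids.pop(sid, None)

    def login(self, sid):
        meta = self.secret_ids.get(sid)
        if meta is None:
            return False
        if meta["expiry"] is not None and time.time() > meta["expiry"]:
            return False
        if meta["uses_left"] is not None:
            if meta["uses_left"] == 0:
                return False
            meta["uses_left"] -= 1
        return True

# Rotation is decoupled: issue the new secret_id first, roll it out
# to your instances, then revoke the old one.
role = AppRole("my-role-id")
old = role.generate_secret_id(num_uses=2)
new = role.generate_secret_id()
assert role.login(old) and role.login(new)  # both valid during rollout
role.revoke_secret_id(old)
assert not role.login(old)                  # old credential now dead
```

Contrast this with userpass, where changing the single password is an atomic swap and there is no overlap window.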
I think a main difference is the ability to use AppRole with secret_id unwrapping for secure introduction. This means that the final auth credentials will never be fully known by your application build and delivery pipeline(s), but only by the application itself.
It should also be noted that the secret_id is dynamic, so each application instance using the same role_id will effectively use a different "password". The secret_id itself can also be limited in the number of times it can be used.
I suggest checking out https://learn.hashicorp.com/tutorials/vault/approle#response-wrap-the-secretid
The requirement is to find the user's password expiration time.
Now, in LDAP you enforce expiration through a password policy.
The password policy attribute pwdMaxAge specifies after how many seconds from the time the password was changed does the password expire.
ldap password policy
The moment you change/create a user password, the operational attribute pwdChangedTime gets added with the timestamp.
Sadly, LDAP does not add any operational attribute for the expiration time; it's something we need to calculate ourselves, by checking whether pwdChangedTime + pwdMaxAge < current_time.
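That calculation is straightforward once you account for the attribute formats: pwdChangedTime is an LDAP GeneralizedTime string and pwdMaxAge is a number of seconds. A minimal sketch (the attribute values here are invented examples):

```python
from datetime import datetime, timedelta, timezone

def password_expired(pwd_changed_time: str, pwd_max_age: int,
                     now: datetime) -> bool:
    """pwd_changed_time is an LDAP GeneralizedTime string like
    '20240101120000Z'; pwd_max_age is in seconds, as in the
    ppolicy pwdMaxAge attribute."""
    changed = datetime.strptime(
        pwd_changed_time, "%Y%m%d%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    return changed + timedelta(seconds=pwd_max_age) < now

now = datetime(2024, 4, 1, tzinfo=timezone.utc)
ninety_days = 90 * 24 * 3600

# A password changed on 1 January 2024 has expired under a 90-day policy
print(password_expired("20240101120000Z", ninety_days, now))  # True
# ...but one changed in mid-March has not
print(password_expired("20240315120000Z", ninety_days, now))  # False
```

The expiry timestamp itself is simply `changed + timedelta(seconds=pwd_max_age)`, which is what you would return alongside the user data.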
In your mods-enabled/ldap file you can fetch the pwdChangedTime attribute. Cool! But how do I fetch the pwdMaxAge attribute? This file only has a structure for users, groups, profiles and clients, but not for the password policy. raddb mods-available details here.
(I can do this programmatically, by writing a script that fetches these attributes via the CLI and then doing the calculation myself, but is it possible to do this through the config? Because, if you look at it, this expiration time is related to the user's attributes, and there should be a way to return it along with the bare minimum user data, like the name and organization, that we already return.)
Thanks!
There is no such operational attribute pwdMaxAge in the user's entry.
The password expiry warning during checking the password is returned by the server in a response control if the client sends the bind request with the appropriate request control (see draft-behera-ldap-password-policy, section 6.1 and 6.2).
This means that the LDAP client (FreeRADIUS in your case) has to support this. Furthermore, all intermediate components (RADIUS server, Wi-Fi access point, etc.) have to properly handle the response and return some useful information up the chain to the user. In practice this does not really work.
Therefore I'd recommend sending password expiry warnings via e-mail. There are ready-to-use scripts out there, like checkLdapPwdExpiration.sh provided by the LDAP Tool Box project.
I have a question about RESTful APIs and security in a multi-tenant environment.
Imagine you have an endpoint: api/branches/:branchId/accounts/:accountId
Authentication is done through Bearer Tokens (OAuth2). Each token includes a set of claims associated with the invoking user. A branchId claim is included in the token, and each user belongs to a single branch.
The security restrictions are the following:
The branchId of the GET request should match the one stored on the token claim.
accountId should be a valid account inside the branch identified by branchId.
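The two restrictions above can be sketched as a single authorization check. The claim names and the lookup function here are illustrative (in a real service the claims come from the validated bearer token and the lookup hits your data store):

```python
# Hypothetical claims as decoded from a validated OAuth2 bearer token;
# accounts_in_branch stands in for a real data-store lookup.
def authorize_account_access(url_branch_id, url_account_id,
                             token_claims, accounts_in_branch):
    # 1. branchId in the URL must match the branchId claim in the token
    if url_branch_id != token_claims.get("branchId"):
        return False
    # 2. the account must belong to that branch
    return url_account_id in accounts_in_branch(url_branch_id)

accounts = {"b1": {"a1", "a2"}, "b2": {"a3"}}
claims = {"sub": "user-42", "branchId": "b1"}
lookup = lambda branch: accounts.get(branch, set())

print(authorize_account_access("b1", "a1", claims, lookup))  # True
print(authorize_account_access("b2", "a3", claims, lookup))  # False: wrong tenant
print(authorize_account_access("b1", "a3", claims, lookup))  # False: account not in branch
```

Note that solution 2 simply drops check 1 by construction, since the branchId is taken from the token rather than the URL.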
The question is: which of the following solutions is correct?
Maintain the endpoint: api/branches/:branchId/accounts/:accountId. And do the required security checks.
Change the endpoint to: api/accounts/:accountId, obtain the branchId from the token, and then do the remaining security checks.
The application is meant to be multi-tenant. Each branch is a tenant, and each user may only access the information associated with its single branch.
Thanks!
I needed to make a decision fast, so I will be using solution 1. If anybody has an argument against or in favor please join the conversation.
Arguments in favor:
I totally agree with this answer: https://stackoverflow.com/a/13764490/2795999, using the full URL allows you to more efficiently decide which data store to connect with, and distribute load accordingly.
In addition, you can easily implement caching and logging, because the full URL is descriptive enough.
Independence of security and API. Today I am using OAuth2, but perhaps tomorrow I can send the request signature instead, and because the URL has all the information needed to fulfill the request, it will still work.
Arguments against:
Information redundancy: the branchId is both in the URL and inside the token.
A little more effort to implement.
I am thinking of ways to implement a mechanism which enables a user to vote without logging any of his details. Each user has a set of attributes that enable him to vote, e.g. ID, name, email-id.
Using these attributes we must guarantee that the user can vote the first time. During this time, complete anonymity is guaranteed.
But if the user comes a second time to vote, he should not be allowed to vote. Is this remotely possible? We are not storing any of the information related to the user: no IP address, email-id or student ID. They are just used as a means of authentication.
I have read many research papers on this but was not able to find anything specific.
a mechanism which enables a user to vote, without logging any of his details
Sure you can. Just don't log anything. But you do need to store information about which user has voted. You actually need info on the user, not just the machine the user used, since the user could vote from another machine.
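One common way to store "this user has voted" without storing the user's details is to keep only a keyed hash (HMAC) of a stable identifier. This is a hedged sketch, not a full voting protocol; the key name and identifier are made up for illustration:

```python
import hashlib
import hmac

# Server-side secret key. With a plain unkeyed hash, anyone who can
# enumerate the ID space (e.g. all student IDs) could brute-force the
# identities back, so a keyed hash (HMAC) is used instead.
SERVER_KEY = b"replace-with-a-long-random-secret"

voted_tokens = set()  # the only state kept: opaque tokens, no user details

def try_vote(student_id: str) -> bool:
    token = hmac.new(SERVER_KEY, student_id.encode(),
                     hashlib.sha256).hexdigest()
    if token in voted_tokens:
        return False          # second attempt: rejected
    voted_tokens.add(token)   # no ID, name, email or IP is stored
    return True

print(try_vote("s123"))  # True: first vote accepted
print(try_vote("s123"))  # False: duplicate rejected
```

Caveat: whoever holds SERVER_KEY can still link a token back to an identifier, so this gives pseudonymity rather than true anonymity; cryptographic schemes such as blind signatures are the usual route to stronger guarantees, which may be what the research papers you found were describing.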
We are currently looking into implementing our own STS (Microsoft WIF) for authenticating our users, and in that process we have come up with a few questions that we haven’t been able to answer.
We have different kinds of users, using different kinds of applications. Each kind of user needs some special types of claims, relevant only for that kind of user and its corresponding application.
Note that we do not control all the clients.
Let’s say that all users are authenticated over simple HTTPS using username and password (.NET MVC3). A user is uniquely identified by their type, username and password (not username and password alone). So I will need to create an endpoint for each user type to be able to differentiate between them. When a user authenticates, I’ll issue a token containing a claim representing the user type. Is there an easier way of doing this? Can I avoid an endpoint for each user type (currently there are three)?
My token service can then examine the authenticated user’s token and transform the claims, issuing a token containing all of that user type’s specific claims. So far so good, except for the multiple endpoints, I think?
If I need to have multiple endpoints, should I expose different metadata documents as well, one for each endpoint? Having one big metadata document describing all claims doesn’t make any sense, since no application needs all of them.
Update
Some clarifications.
Certain applications are only used by certain types of users. No application can be used by multiple user types.
Depending on which type of application the request is coming from, the username and password need to be checked against that user type's store. There is a user store for each type of application. That is why I need to know which application type the request is coming from; I can't resolve the type from the username and password alone.
Based on your problem description, it sounds like you have three independent user "repositories" (one for each user type).
So IMHO this would be a valid scenario for three STSes, or one STS with multiple endpoints.
Another way to solve this could be to distinguish the user type by the identifier of the relying party redirecting the user to the STS. This identifier is submitted in the wtrealm parameter.
The processing sequence could look like the following:
Get configuration for relying party (wtrealm) from configuration store (I'd suggest a database for your rather complex case)
Validate user with username, password and user type (from relying party configuration)
Add claims depending on user type or relying party specific configuration.
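The three steps above can be sketched as follows. All identifiers, stores and claims here are illustrative assumptions, not WIF API calls; the point is only the flow from wtrealm to user store to claims:

```python
# Toy sketch of the processing sequence: the configuration store maps
# each relying party identifier (wtrealm) to its user type and the
# extra claims to issue (all names here are invented for illustration).
RELYING_PARTIES = {
    "urn:app:portal":  {"user_type": "employee", "claims": {"role": "staff"}},
    "urn:app:partner": {"user_type": "partner",  "claims": {"role": "external"}},
}

# One user store per user type, as in the question.
USER_STORES = {
    "employee": {"alice": "pw1"},
    "partner":  {"bob": "pw2"},
}

def issue_token(wtrealm, username, password):
    # 1. Get configuration for the relying party (wtrealm)
    rp = RELYING_PARTIES.get(wtrealm)
    if rp is None:
        return None
    # 2. Validate the user against that user type's store
    store = USER_STORES[rp["user_type"]]
    if store.get(username) != password:
        return None
    # 3. Add claims depending on user type / relying party configuration
    return {"sub": username, "user_type": rp["user_type"], **rp["claims"]}

print(issue_token("urn:app:portal", "alice", "pw1"))
print(issue_token("urn:app:partner", "alice", "pw1"))  # None: wrong store
```

Because the user store is resolved from the wtrealm, a single endpoint can serve all three user types.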
The database/class structure for this would essentially map each relying party (wtrealm) identifier to its user store and claim configuration.
Need some more information to answer:
Are certain applications only used by certain types of users? Or can any user type access any application? If the former, you can configure the STS for that application to pass that user type as a claim. Each application can be configured to have its own subset of claims.
Where is the user type derived from? If from a repository, could you not simply construct a claim for it?
Update:
@Peter's solution should work.
With regard to the 3 STSes vs. 3 endpoints:
Many STSes - can use the same standard endpoint with different "code-behind". Would still work if you migrated to an out-of-the-box solution. Extra work for certificate renewals.
One STS - custom endpoints won't be able to be migrated. Only one STS to update for certificate renewals.
Metadata - given that it can be generated dynamically, it doesn't really matter. Refer to Generating Federation Metadata Dynamically.