This question is about managed identity token caching on Azure VMs. Specifically, when the permissions of a managed identity are changed, is the token cached on the VM invalidated immediately, so that a token retrieved after the permission change is up to date and reflects the permissions currently granted to the identity? If the invalidation is not automatic, is there a way for the user to force it manually? Is there any documentation on the expected behavior?
Some details of our use scenario and testing:
We are using a managed identity (either system-assigned or user-assigned) on an Azure virtual machine to query the Graph API. The access token is retrieved via the local IMDS endpoint. We encountered some strange behavior that is probably related to access token caching.
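For reference, this is roughly how we retrieve the token (a minimal sketch using Python's `requests` package; error handling mostly omitted):

```python
import requests

# Fixed, link-local IMDS endpoint available inside the VM.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def get_graph_token():
    # Ask IMDS for an access token scoped to Microsoft Graph.
    # For a user-assigned identity, a "client_id" parameter would be
    # added to select which identity to use.
    resp = requests.get(
        IMDS_TOKEN_URL,
        params={
            "api-version": "2018-02-01",
            "resource": "https://graph.microsoft.com/",
        },
        headers={"Metadata": "true"},  # required by IMDS
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```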
Scenario one:
1. Permission is NOT granted to the VM's managed identity.
2. Retrieving the access token and querying Graph failed with a 403 error, which was expected.
3. We granted the permission to the managed identity.
4. We repeated step 2; the operation still failed with the same error.
Scenario two:
1. Permission is granted to the VM's managed identity.
2. Retrieving the access token and querying Graph succeeded, which was expected.
3. We removed the permission from the managed identity.
4. We repeated step 2; the operation still succeeded.
Scenario three:
1. Permission is granted to the VM's system-assigned managed identity.
2. Retrieving the access token and querying Graph succeeded, which was expected.
3. We removed the system-assigned managed identity itself from the VM.
4. We repeated step 2; the operation still succeeded.
5. Only after restarting the VM did we fail to retrieve an access token for this logically non-existent managed identity via IMDS.
Could anybody shed some light on this question? Thanks.
Tokens are cached by various infrastructure systems, and it can take several hours (up to one day) for a change to be reflected in a token.
See "Limitation of using managed identities for authorization" in the Azure documentation for details.
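One way to observe this while testing is to decode the JWT returned by IMDS and compare its "roles" and "exp" claims across calls; a minimal sketch, without signature verification:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    # Decode the JWT payload only (no signature verification) to inspect
    # claims such as "exp" (expiry) and "roles" (Graph app permissions).
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# If "roles" and "exp" are unchanged after a permission change, IMDS has
# handed back a cached token rather than requesting a fresh one.
```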
We are using Keycloak (version 16.0.0) as our IAM.
In our code, we are using the Admin REST API to handle user login.
For the first login, we send the user's credentials and use the password grant type to get a valid access token.
After that, whenever the access token is about to expire (in our case after 5 minutes), we use the client_credentials grant type to generate a new access token.
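For clarity, the two calls look roughly like this (a sketch against Keycloak's standard token endpoint; the host, realm, client, and credentials are placeholders, and the legacy WildFly distribution prefixes the path with /auth):

```python
import requests

# Placeholder endpoint: the token endpoint of the realm.
TOKEN_URL = "https://kc.example.com/realms/myrealm/protocol/openid-connect/token"

# First login: password grant, which starts a user session in Keycloak.
login = requests.post(TOKEN_URL, data={
    "grant_type": "password",
    "client_id": "my-client",
    "client_secret": "<client secret>",
    "username": "alice",
    "password": "<password>",
}).json()

# Renewal as currently implemented: client_credentials grant. This token
# represents the client's service account, not the user, so it is not
# bounded by the user's SSO/client session settings.
renewal = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "my-client",
    "client_secret": "<client secret>",
}).json()
```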
I'm not an expert, but it seems to me that unlike working with refresh tokens, this way of generating a new access token can go on forever and is not bounded by any value in the Keycloak configuration. Am I right?
I tried playing around with the SSO Session Idle/Max and Client Session Idle/Max values (both at the realm and client level), but they seem to have no impact.
Any insight would be helpful.
"I'm not an expert, but it seems to me that unlike working with refresh tokens, this way of generating a new access token can go on forever and is not bounded by any value in the Keycloak configuration. Am I right?"
What happens if the account of user A gets compromised? With your current setup, you would keep that user logged in "forever", unless you "tell" Keycloak not to emit tokens on behalf of the client that is used with the client_credentials grant type. But then other, unrelated users would suffer. There are additional issues with this design, for example the assignment of roles to users: how would you handle user-specific roles in an access token emitted for the client that is used with the client_credentials grant type?
This seems to me to be a poor man's version of the refresh token mechanism.
"For the first login, we send the user's credentials and use the password grant type to get a valid access token."
If possible, you should use a more modern workflow for user login, such as the Authorization Code Flow (i.e., the standard flow in Keycloak). Otherwise, just use direct access grants with a short access token lifespan and renew the token using the refresh token mechanism.
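A sketch of that renewal using the refresh token mechanism instead (same placeholder endpoint and client as above):

```python
import requests

TOKEN_URL = "https://kc.example.com/realms/myrealm/protocol/openid-connect/token"

# "login" is the JSON response from the initial password-grant request;
# its refresh token is tied to the user's session, so the SSO Session
# Idle/Max and Client Session Idle/Max limits actually apply here.
def renew(login: dict) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "client_id": "my-client",
        "client_secret": "<client secret>",
        "refresh_token": login["refresh_token"],
    })
    resp.raise_for_status()
    return resp.json()
```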
A new local/domain user needs to be created from a web-based application.
I have used the System.DirectoryServices and System.DirectoryServices.AccountManagement namespaces to achieve this.
If I run the application from Visual Studio (2012), everything goes smoothly (the user is added to Active Directory!).
But if I host the website in IIS and try the same operation, it fails with an "Access is denied" error.
I did some research and found the reason: it happens because the application pool identity is set to a user who has only user-level privileges.
But for security reasons, a user with administrative privileges cannot be used as the application pool identity in this application.
So, is there any alternative way to bypass the application pool identity and allow the application to create local/domain users successfully?
Any replies or suggestions would be appreciated.
I'm developing an application that manipulates data in Google Cloud Storage buckets owned by the user. I would like to set it up so the user can arrange to grant the application access to only one of his or her buckets, for the sake of compartmentalization of damage if the app somehow runs amok (or is impersonated by a bad actor, or whatever).
But I'm more than a bit confused by the documentation around GCS authorization. The docs on OAuth 2.0 authentication show that there are only three choices for scope: read-only, read-write, and full-control. Does this mean that what I want is impossible, and that if I grant access to read/write one bucket, I'm granting access to read/write all of my buckets?
What is extra confusing to me is that I don't understand how this all plays in with GCS's notion of projects. It seems like I have to create a project to get a client ID for my app, and the N users also have to create N projects for their buckets. But then it doesn't seem to matter: the client ID from project A can access the buckets from project B. What are project IDs actually for?
So my questions, in summary:
1. Can I have my installed app request an access token that is good for only a single bucket?
2. If not, are there any other ways that developers and/or careful users typically limit access?
3. If I can't do this, it means the access token has serious security implications. But I don't want to have to ask the user to go generate a new one every time they run the app. What is the typical story for caching the token?
4. What exactly are project IDs for? Are they relevant to authorization in any way?
I apologize for the scatter-brained question; it reflects what appears to be scatter-brained documentation to me. (Or at least documentation that isn't geared toward the installed-application use case.)
I had the same problem as you.
Go to https://console.developers.google.com, then go to Credentials and create a new Client ID.
You have to delete the email* in the "permissions" of your project, and add it manually to the ACL of your bucket.
* = the email of the service account: xxxxxxxxxxxx-xxxxxxxxx@developer.gserviceaccount.com
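That manual ACL step can also be done programmatically; a sketch using the (much newer) google-cloud-storage Python package, with project, bucket, and email as placeholders:

```python
from google.cloud import storage

# Acting as the bucket owner, grant the service account's email
# read access to one specific bucket via its ACL.
client = storage.Client(project="users-project-id")
bucket = client.bucket("users-bucket")
bucket.acl.user("xxxxxxxxxxxx-xxxxxxxxx@developer.gserviceaccount.com").grant_read()
bucket.acl.save()
```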
If you are building an app, it's server-to-server OAuth:
https://developers.google.com/accounts/docs/OAuth2ServiceAccount
"Can you be clearer about which project I create the client ID on (the developer's project that owns the installed application, or the user's project that own's the bucket)?"
the user's project that own's the bucket
It's the user taht own the bucket who grant access.
It turns out I'm using the wrong OAuth flow if I want to do this. Thanks to Euca for the inspiration to figure this out.
At the time I asked the question, I was assuming there were multiple projects involved in the Google Developers Console:
- One project for me, the developer, that contained generated credentials for an "installed application", with the client ID and (supposed) secret baked into my source code.
- One project for each of my users, owning and being billed for a bucket that they were using the application to access.
Instead of using "installed application" credentials, what I did was switch to "service account" credentials, generated by the user in the project that owns their bucket. That allows them to create and download a JSON key file that they can feed to my application, which then uses the JSON Web Tokens flow of OAuth 2.0 (aka "two-legged OAuth") to obtain authorization (see the sketch after the list below). The benefits of this are:
- There is no longer a need for me to have my own project, which was a weird wart in the process.
- By default, the service account credentials allow my application to access only the buckets owned by the project for which they were generated. If the user has other projects with other buckets, the app cannot access them.
- But the service account has an "email address" just like any other user, and can be added to the ACLs for any bucket regardless of project, granting access to that bucket.
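For illustration, consuming the user-supplied key file looks roughly like this with the current google-cloud-storage Python package, which handles the JWT flow internally (file and bucket names are placeholders):

```python
from google.cloud import storage

# Build a client from the JSON key file the user generated in the
# project that owns their bucket; the JWT ("two-legged") OAuth 2.0
# flow is performed by the library under the hood.
client = storage.Client.from_service_account_json("user-supplied-key.json")

# The client can now only reach buckets the service account can see.
for blob in client.list_blobs("users-bucket"):
    print(blob.name)
```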
About your answer: glad you solved your problem.
You can also reduce the access to only ONE bucket of the project, for example if you have several buckets and the application does not need access to all of them.
By default, the service account has FULL access (read, write, and ACL) to all buckets; I usually limit it to the needed bucket.
I am trying to protect a Java servlet with OpenAM and the J2EE Tomcat agent. I got this part working using OpenAM's embedded OpenDJ.
Now I am trying to authenticate against an LDAP server, so I added an LDAP module instance in OpenAM, but I get "User has no profile in this organization" when I try to use the uid/password of a user from that LDAP store.
I checked the OpenAM administration guide, but its description of this is rather brief. I am wondering if it is even possible to do this without using the data store configured for OpenAM?
The login process in OpenAM is made up of two stages:
1. Verifying credentials, based on the authentication chain and the individual authentication module configurations
2. User profile lookup
By configuring the LDAP authentication module you took care of the authentication part; however, the profile lookup fails because you haven't configured the user data store (see the Data Stores tab). Having a configured data store allows you to expose additional user details across your deployment (e.g. include user attributes in SAML assertions, or map them to HTTP headers with the agent), so in most scenarios having a data store configured is necessary.
If you still don't want to configure a data store, you can prevent the user profile lookup failure by going to Access Control -> <realm> -> Authentication -> All Core Settings -> User Profile Mode and setting it to Ignore.
This is unrelated to authentication, but it is related to authorization: you have to configure appropriate policies; see the OpenAM docs.
Agents enforce authorization; OpenAM determines whether the user has permission to access a protected resource.
As Bernhard has indicated, authentication is only part of the process of granting access to a user. He is referring to using a policy to control access.
Another method is to check programmatically whether the authenticated user is a member of the desired group. This can be useful when you want access control over resources that OpenAM doesn't know about (e.g. specific data).
For example, let's say that you want different groups to have access to different rows of a table in a database. You can retrieve the group information associated with the user and add it to your database query, thus restricting the data returned.
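A hypothetical sketch of that pattern (Python with the standard sqlite3 module; the table and column names are invented for illustration):

```python
import sqlite3

def fetch_rows_for_user(conn: sqlite3.Connection, user_groups: list[str]):
    # Restrict the query to rows whose owning group is one of the
    # groups retrieved from the authenticated user's OpenAM profile.
    placeholders = ",".join("?" for _ in user_groups)
    sql = f"SELECT * FROM records WHERE owner_group IN ({placeholders})"
    return conn.execute(sql, user_groups).fetchall()
```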
I'm sure you could do this with OpenAM as well, using custom modules to let a policy treat information in the database as a resource, but I've found it is much simpler to perform this fine-grained access control in your own code, and it is in all likelihood significantly faster.
I'm working on federating an application with various areas and extremely fine-grained permissions. Each of the areas has a federated WCF endpoint to communicate back to the server. Because of the fine-grained permissions, a single token containing all of the permissions can be as large as 1 MB, maybe more.
Requirements dictate that the user's username and password credentials must not be held within our code base after the initial login process. The permissions cannot be combined into a smaller set. We are using Thinktecture.IdentityServer for our STS implementation.
My proposed solution is to break each endpoint into its own realm in the STS, with the STS returning a token containing the permission claims specified for that realm. To accomplish this, I would like to have an Auth realm that authenticates by username/password and returns a token containing the user, tenant, and subgroup IDs, which could then be used as credentials for authenticating to the other realms.
Setting up the STS to issue realm-specific tokens has already been implemented. The only remaining requirement is that the username/password is not kept around within our code base.
Is it possible to configure the STS to allow authentication by presenting a previously issued token from a specific realm? Is there a better solution that I have not come upon?
Yes, you can authenticate to STS A using a token issued by STS B. STS A has to be configured to trust STS B as a known identity provider.
With the Thinktecture STS, I think you can do this by configuring a new WS-Star identity provider. If one realm's STS adds the other realm's STS as an identity provider, it should begin accepting tokens issued from that realm and certificate.
For WCF, a reasonably painless way to set up issued-token channels is with the WIF CreateChannelWithIssuedToken extension method:
http://msdn.microsoft.com/en-us/library/ee517268.aspx
1 MB is a very big token indeed. There may be other good reasons to split into multiple STSes in separate realms, but you might alternatively solve the problem by dynamically deriving permissions through a policy or permission store on the relying-party side, where the token gets consumed, rather than pre-calculating all the granular permissions on the STS side. But I say this without knowing your specific application, so feel free to tell me to go away :)
What you really want is renewing an expired token. We don't support that, and we don't have plans to.
You could set the expiration time to a value that works for you, and then force re-login after that.
1 MB tokens are not a good idea: you either need to round-trip that, or create session affinity. Tokens are meant to describe the user's identity, not to dump every possible value into them.
Why doesn't the RP load the authZ rules from the IdP via a service call?
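That last suggestion would look something like this on the relying-party side (a hypothetical sketch; the authorization endpoint and the shape of its response are invented):

```python
import requests

def load_permissions(user_id: str) -> list[str]:
    # The token carries only the small identity claims (user, tenant,
    # subgroup IDs); the relying party resolves the granular permissions
    # with a service call instead of carrying them in a 1 MB token.
    resp = requests.get(
        f"https://authz.example.com/permissions/{user_id}",  # hypothetical
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["permissions"]
```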