Is there a way to override the exp property on access tokens in Amazon Cognito?

I have a requirement to be able to specify session timeouts on a per-user basis (so that it may be a different value for each user). It seems natural to use the 'exp' property on the access token to accomplish this (as that is its purpose in the OAuth spec), but Cognito seems to ignore updates to it in the preTokenGeneration trigger. Is there a way to update this on a per-user basis? Or do I really need to define some custom attribute that will be checked on the ID token?

Great question. I'm sure you know that since August 2020 Cognito allows you to configure access token expiry time from 5 minutes to 1 day. The configuration is per app client. If you were able to split your users across app clients, that could be an option (e.g. admins with long sessions log in on one page, normal users on another). You could lock the app clients down to certain users using a pre-authentication trigger. That's not a very configurable solution though.
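For illustration, here's a minimal sketch of such a pre-authentication trigger in Java, using the aws-lambda-java-core RequestHandler with a raw event map. The app client ID and the allow-list are hypothetical placeholders; in practice the allow-list would come from a table or a custom attribute. Cognito treats an exception thrown in this trigger as a rejected sign-in.

```java
import java.util.Map;
import java.util.Set;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

/**
 * Sketch of a pre-authentication Lambda trigger that restricts one app client
 * to an allow-list of users. Names and IDs are illustrative only.
 */
public class PreAuthAppClientGuard implements RequestHandler<Map<String, Object>, Map<String, Object>> {

    // Hypothetical values: in practice these would come from configuration or a table.
    private static final String LONG_SESSION_CLIENT_ID = "abc123longsessionclient";
    private static final Set<String> LONG_SESSION_USERS = Set.of("admin1", "admin2");

    @Override
    @SuppressWarnings("unchecked")
    public Map<String, Object> handleRequest(Map<String, Object> event, Context context) {
        String userName = (String) event.get("userName");
        Map<String, Object> callerContext = (Map<String, Object>) event.get("callerContext");
        String clientId = (String) callerContext.get("clientId");

        if (LONG_SESSION_CLIENT_ID.equals(clientId) && !LONG_SESSION_USERS.contains(userName)) {
            // Throwing here rejects the sign-in attempt for this app client.
            throw new RuntimeException("User is not permitted to sign in with this app client");
        }
        return event; // returning the event unchanged lets authentication continue
    }
}
```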
I also wonder what you mean exactly by a session? This would typically mean one of two things: either your session expires and you have to log in again after a fixed length of time (e.g. AWS is 24 hours), or your session is ended if you are idle for a certain amount of time (say 30 minutes). Could you elaborate on your requirement a bit?

Related

Maintain Concurrent Grants in Single OIDC Session

I am using the node-oidc-provider library as an OIDC-based interface to my auth service, which eventually does SAML- or OIDC-based federation with the client. I have a scenario where a user can perform an e-sign after login.
During e-sign, the user needs to re-authenticate, and at this point the library creates a whole new session with a new grant.
I want this operation to happen within the primary login session, with a limited grant and a very short expiry, instead of creating a new session.
What would be the best way of achieving this? Have you worked on a similar requirement?
node-oidc-provider can only have a single grant per session, which seems to me quite a limitation.
Please HELP! Thanks in advance.
I tried a couple of things, but they seemed like hacky approaches rather than something close to a standard.
I would consider the following options:
USER SIGN IN
The initial redirect uses scope=openid, say. A grant is created, with a 4-hour refresh token and a 15-minute access token. It may involve consent.
HIGH PRIVILEGE REDIRECT
The second redirect uses scope=openid payment. Another 4-hour refresh token and a short-lived access token are created. This replaces the grant, which is pretty standard, but you don't want the payment scope to hang around for long. (A sketch of this redirect follows after these options.)
SCOPE TIME TO LIVE
The payment scope is assigned a short time to live, of 10 minutes say. When the access token expires, the payment scope is not issued on the next token refresh. Most OIDC providers probably don't support this though.
ALTERNATIVE
The next time the high privilege scope expires, don't refresh it. Instead, just do another redirect with scope=openid. This will usually be an SSO event, so usability is not too bad.
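To make the high-privilege redirect concrete, here is a rough sketch of building that step-up authorization request. The endpoint, client ID and redirect URI are made-up placeholders, and prompt=login is just one standard way of forcing re-authentication; check what node-oidc-provider and your policy actually require.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class StepUpRedirect {

    /**
     * Builds the authorization request URL for the high-privilege step-up redirect.
     * Endpoint, client ID and redirect URI are hypothetical; adjust to your provider.
     */
    static String buildStepUpUrl(String state, String nonce) {
        String authorizeEndpoint = "https://auth.example.com/authorize"; // hypothetical issuer
        return authorizeEndpoint
                + "?response_type=code"
                + "&client_id=" + enc("my-client-id")
                + "&redirect_uri=" + enc("https://app.example.com/callback")
                + "&scope=" + enc("openid payment")    // high-privilege scope added
                + "&prompt=login"                      // force re-authentication for e-sign
                + "&state=" + enc(state)
                + "&nonce=" + enc(nonce);
    }

    private static String enc(String v) {
        return URLEncoder.encode(v, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(buildStepUpUrl("abc123", "n-0S6_WzA2Mj"));
    }
}
```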

Xero API OAuth 2.0 user authentication

For the past few years I have been using an unattended remote server to process invoices through the Xero API (OAuth 1.0).
Periodically (every financial year) we create a new Xero organisation to keep things tidy and avoid slowdown.
I have just come to create a new organisation and associated app, but have found that I can only use OAuth 2.0, which I do not have a massive issue with, BUT the fact that I have to 'user' authenticate is going to be a real problem, as my process is 'unattended' and started via cron jobs.
Can anyone tell me if there is a way around this? And if not, are there any solutions for doing this?
Alternatively, is there any way I can change one of my existing OAuth 1.0 apps to point to a different organisation (i.e. the new one I have just created)?
It seems a little short-sighted not to have considered unattended processes; I cannot be the only person doing this??
Any help or pointers would be greatly appreciated.
Thanks,
Mike.
Yes, you are not the only one doing this :) and yes, private apps are essentially deprecated at the end of 2020; the move was not taken lightly. Since every API action through Xero's API is on behalf of a user account, the team decided to move towards OAuth 2.0 (the industry standard) with a user consent screen.
If you need these long-standing API connections on behalf of a user, they will need to authenticate that API connection at least once initially to get you an access_token and `refresh_token`. Access tokens are valid for 30 minutes, while the refresh token is good for 60 days. So as long as you refresh at least once every 60 days, you can keep that long-tail process going.
If you don't have the means to build out that initial authentication screen to have your user validate on their own, you can use the CLI tool below to get your initial token set and securely store it on your remote server. An additional change is that that (or some) process will need to ensure the token is refreshed before use (see the sketch below the links), and that it has been given permission to connect to a specific user's tenant-id, since they might be part of multiple Xero orgs; the manual consent screen is where a user selects which tenant/org they are granting API permissions to.
CLI to get Xero tokens from the command line
https://github.com/XeroAPI/xoauth
More info here: https://community.xero.com/developer/discussion/109207632#answer110970761
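On the refresh requirement mentioned above, the call is a standard OAuth 2.0 refresh_token grant. A minimal sketch, assuming Xero's usual https://identity.xero.com/connect/token endpoint and Java 11's HttpClient; the client ID/secret and stored refresh token are placeholders:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class XeroTokenRefresh {

    // Placeholders: use your own app's credentials and your stored refresh token.
    static final String CLIENT_ID = "your-client-id";
    static final String CLIENT_SECRET = "your-client-secret";

    /** Exchanges the stored refresh token for a new access/refresh token pair. */
    static String refresh(String refreshToken) throws Exception {
        String body = "grant_type=refresh_token&refresh_token="
                + URLEncoder.encode(refreshToken, StandardCharsets.UTF_8);
        String basic = Base64.getEncoder()
                .encodeToString((CLIENT_ID + ":" + CLIENT_SECRET).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://identity.xero.com/connect/token")) // assumed token endpoint
                .header("Authorization", "Basic " + basic)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON response contains new access_token and refresh_token values;
        // persist the returned refresh_token, as it may be rotated on each use.
        return response.body();
    }
}
```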
UPDATE
client_credentials (a.k.a. machine-to-machine) grants are coming to Xero's OAuth 2.0 gateway.
You can read more about it here:
https://developer.xero.com/announcements/custom-integrations-are-coming/

Architecture for fast globally distributed user quota management

We have built a free, globally distributed mobility analytics REST API. Meaning we have servers all over the world which run different versions (USA, Europe, etc.) of the same application. The services are behind a load balancer, so I can't guarantee that the same user always gets the same application/server if he/she makes requests today or tomorrow. The API is public, but users have to provide an API key in order for us to match them to their paid request quota.
Since we do heavy number crunching with every request, we want to minimize request times as far as possible, in particular for authentication/authorization and quota monitoring. Since we currently only use one user database (which has to be located in a single data center), there are cases where users in the US make a request to an application/server in the US which authenticates the user in Europe. So we are looking for a solution where the user database interaction:
happens on the same application server
gets synchronized between all application servers
should be easily integrable into a Java application
should be fast (changes happen on every request)
Things we have done so far:
a single database on each server > not synchronized, nightmare
a single database for all servers > OK when used with a slave as fallback, but American users have to authenticate over the Atlantic
started installing BDR but gave up along the way (no time, too complex, hard to make the transition)
looked at redis.io
Since this is my first globally distributed REST API, I wonder how other companies (Yelp, Google, etc.) do this.
Any feedback is very kindly appreciated,
Cheers,
Daniel
There is no single right answer; there are several ways to do this. I'll describe the one I'm most familiar with (which is probably one of the simplest ways to do it, although probably not the most robust).
1. Separate authentication and authorization
First of all, separate authentication and authorization. A user authenticating across the Atlantic is fine; every user request that requires an authorization round trip across the Atlantic is not. Main user credentials (e.g. a password hash) are a centralized resource, and there is no way around that. But you do not need the main credentials every time a request needs to be authorized.
This is how a company I worked at did it (although this was Postgres/Python/Django, nothing to do with Java):
Each server has a cache of database queries in memcached but it does not cache user authentication (memcached is very similar to redis, which you mention).
The authentication is performed in the main data centre, always.
A successful authentication produces a user session that expires after a day.
The session is cached.
This may produce a situation in which a user can have an action authorized by more than one session at a given time. The algorithm is: if at least one user session has not expired, the user is authorized. Since the cache is local, there is no need for the costly operation of fetching data across the Atlantic for every request.
2. Session revival
Sessions expiring after exactly a day may be annoying for the user, since he can never be sure that his session will not expire in the next couple of seconds. Each request the user makes that is authorized by a session should therefore extend the session lifetime to a full day again.
This is easy to implement: keep in the session the timestamp of the last request made, and count the session lifetime from that timestamp. If more than one session may authorize a request, the algorithm should select the youngest session (the one with the longest lifetime remaining) to update.
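A minimal sketch of that per-server cache with sliding expiry (a plain in-memory map stands in for memcached here; all names are illustrative):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of the per-server session cache with sliding expiry described above.
 * In the real system this would live in memcached/redis; a local map is used
 * here only to show the algorithm.
 */
public class SessionCache {

    private static final Duration LIFETIME = Duration.ofDays(1);

    // sessionId -> timestamp of the last authorized request
    private final ConcurrentHashMap<String, Instant> lastSeen = new ConcurrentHashMap<>();

    /** Called after a successful authentication in the main data centre. */
    public void put(String sessionId) {
        lastSeen.put(sessionId, Instant.now());
    }

    /** Authorizes a request and revives the session if it is still alive. */
    public boolean authorize(String sessionId) {
        Instant seen = lastSeen.get(sessionId);
        if (seen == null || seen.plus(LIFETIME).isBefore(Instant.now())) {
            lastSeen.remove(sessionId);         // expired or unknown: force re-authentication
            return false;
        }
        lastSeen.put(sessionId, Instant.now()); // sliding expiry: extend to a full day again
        return true;
    }
}
```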
3. Request logging
This is as far as the live system I worked with went. The remainder of the answer is about how I would extend such a system to make space for logging the requests and verifying request quota.
Assumptions
Let's start by making a couple of design assumptions and argue why they are good assumptions. You're now running a distributed system, since not all data in the system is in a single place. A distributed system should give priority to response speed and be as horizontal (not hierarchical) as possible; to achieve this, consistency is sacrificed.
The session mechanism above already sacrifices some consistency. For example, if a user logs in and talks continuously to server A for a day and a half and then a load balancer points the user to server B, the user may be surprised that he needs to log in again. That is an unlikely situation: in most cases a load balancer would divide the user's requests equally over both server A and server B over the course of that day and a half, and both server A and server B would have live sessions at that point.
In your question you wonder how Google deals with request counting, and I'll argue here that it deals with it in an inconsistent way. By that, I mean that you cannot enforce a strict upper limit on requests if you're sacrificing consistency. Companies like Google or Yelp simply say:
"You have 12000 requests per month but if you do more than that you will pay $0.05 for every 10 requests above that".
That allows for an easier design: you can count requests at any time after they happened, the counting does not need to happen in real time.
One last assumption: a distributed system will have problems with duplicated internal data. This happens because parts of the system are running in real time and parts are doing batch processing without stopping or timestamping the real-time system, and because you cannot be 100% sure about the state of the real-time system at any given point. It is mandatory that every request coming from a customer has a unique identifier of some sort; it can be as simple as customer number + sequence number, but it needs to exist in every request. You may also add such a unique identifier the moment you receive the request.
Design
Now we extend our user session that is cached on every server (often cached in a different state on different servers that are unaware of each other). For every customer request, we store the request's unique identifier as part of the session cache. That's right: the request counter in the central database is not updated in real time.
At a point in time (end of day processing, for example) each server performs a batch processing of request identifiers:
Live sessions are duplicated, and the copies are expired.
All request identifiers in expired sessions are concatenated together, and the central database is written to.
All expired sessions are purged.
This produces a race condition, where a live session may receive requests whilst the server is talking to the central database. For that reason we do not purge request identifiers from live sessions. This, in turn, causes a problem where a request identifier may be logged to the central database twice, and it is for that reason that we need the identifiers to be unique: the central database shall ignore request updates with identifiers that are already logged.
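A rough sketch of that batch flush, assuming a hypothetical request_log table and PostgreSQL's ON CONFLICT clause to make duplicate identifiers harmless:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.Map;

/**
 * Sketch of the end-of-day batch flush described above. Table and column names
 * are hypothetical; the important part is that the insert is idempotent, so a
 * request identifier logged twice is simply ignored by the central database.
 */
public class RequestLogFlusher {

    /** expiredSessions: customerId -> request identifiers collected in that session. */
    public void flush(Connection central, Map<String, List<String>> expiredSessions) throws Exception {
        // ON CONFLICT DO NOTHING makes re-sent identifiers harmless (PostgreSQL syntax).
        String sql = "INSERT INTO request_log (customer_id, request_id) VALUES (?, ?) "
                   + "ON CONFLICT (request_id) DO NOTHING";
        try (PreparedStatement ps = central.prepareStatement(sql)) {
            for (Map.Entry<String, List<String>> e : expiredSessions.entrySet()) {
                for (String requestId : e.getValue()) {
                    ps.setString(1, e.getKey());
                    ps.setString(2, requestId);
                    ps.addBatch();
                }
            }
            ps.executeBatch(); // one round trip to the central database per server, per batch run
        }
    }
}
```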
Advantages
99.9% uptime; the batch processing does not interrupt the real-time system.
Reduced writes to the database, and reduced communication with the database in general.
Easy horizontal growth.
Disadvantages
If a server goes down, recovering the requests performed may be tricky.
There is no way to stop a customer from doing more requests than he is allowed to (that's a peculiarity of distributed systems).
You need to store unique identifiers for all requests; a counter is not enough to measure the number of requests.
Invoicing the user does not change: you just query the central database and see how many requests a customer performed.

How to check User Online Status with Spring Security in Grails?

We use Grails Spring Security in our application to perform user authentication. If a user logs in to our application, the rememberMe cookie will be saved. This means that the user will remain logged in between browser sessions.
How can I check which users are currently online? I read that you can retrieve this information from the session using SessionRegistryImpl or HttpSessionListener, but I have no idea how to implement that. I found this post, but I am not sure how to translate it to Grails: Online users with Spring Security
Any idea?
I built a dating application that relies on online users, and I created a Grails service that keeps track of them. All you have to do is create a service that keeps a concurrent hash map; the service is a singleton, so there is only one instance for the whole web application. When your user logs in for the first time, you put the user ID and a future time into the hash map. For example:
Key = UserID
Value = Now + 30min
So when the user logs in, you add 30 minutes to the login time and insert it into the hash map. For every request the user sends, you update the value in the hash map by looking up the expiry time using the user ID. Now, if the user closes the browser, the expiry is no longer updated and the online status becomes invalid. You can have a job that runs every 30 minutes and removes the keys whose expiry date is earlier than now. Or, if you want to count the online users, just loop through the map and count the entries whose expiry date is later than now.
It's a hash map in memory, very easy to access and manipulate, and it's fast. That works great for me, and since I'm using a concurrent hash map, updates and reads are safe. Hope this helps you get what you want. Sorry my answer is late, but I just saw this question :)
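A minimal sketch of such a service in plain Java (in Grails this would be a singleton service bean; the names here are illustrative):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of the online-user tracking service described above. In Grails this
 * would be a singleton service; the map and method names are illustrative.
 */
public class OnlineUserTracker {

    private static final Duration TIMEOUT = Duration.ofMinutes(30);

    // userId -> time at which the user is no longer considered online
    private final ConcurrentHashMap<Long, Instant> expiries = new ConcurrentHashMap<>();

    /** Call on login and on every authenticated request (e.g. from a filter). */
    public void touch(Long userId) {
        expiries.put(userId, Instant.now().plus(TIMEOUT));
    }

    public boolean isOnline(Long userId) {
        Instant expiry = expiries.get(userId);
        return expiry != null && expiry.isAfter(Instant.now());
    }

    public long countOnline() {
        Instant now = Instant.now();
        return expiries.values().stream().filter(e -> e.isAfter(now)).count();
    }

    /** Periodic cleanup job: drop entries that have already expired. */
    public void purgeExpired() {
        Instant now = Instant.now();
        expiries.values().removeIf(e -> e.isBefore(now));
    }
}
```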

Allow to login only one user at time

In our system, one client may have multiple operators. However, there is a "wish" from the client.
One company has an account; however, multiple operators can be assigned to this company. The client wants us to prepare a solution so that only one operator from a company can be logged in to the system at the same time. How can I achieve this?
Just by making sure the system has the ability to validate the login on each request. Either:
Actively (by querying state -- possibly a database to compare some secrets) or
Passively -- using some form of cryptography and tokens (possibly in the cookie).
Option one is the easiest; option two is the fastest. If you validate on each request, you can make sure that only one user remains logged in -- if another user signs in, you can invalidate the existing active login, perhaps with a cooldown period of n minutes.
You have to develop some form of login scheme; Kerberos is the de facto scheme. Read this easy-to-follow tutorial on Kerberos, Designing an Authentication System: a Dialogue in Four Scenes. It should show you what you really need to do.
You could use a database field to flag that they are logged in. Update the field to 'logged in' when they do so, and then update it to 'logged out' when they log out.
You'd also need to monitor login sessions for expiry, to update the field if a user never bothered to explicitly log out.
The best approach I've used:
Create a table used to track whether an operator is logged in (e.g. userid and last_accessed_dt)
On each page request by the operator update the last requested date/time
When an operator attempts to log in, they can only do so if the last requested date/time is older than the session timeout period of your website (e.g. 30 minutes), or if they are the last operator user ID; this way they can quickly recover from a logoff, etc. (see the sketch after this list)
When an operator logs off have the Last Accessed cleared
When the session times out have the Last Accessed cleared
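A rough sketch of that login check, assuming a hypothetical operator_session table with company_id, user_id and last_accessed columns:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;

/**
 * Sketch of the single-active-operator login check described above.
 * The operator_session table and its columns are hypothetical.
 */
public class SingleOperatorLogin {

    private static final Duration TIMEOUT = Duration.ofMinutes(30);

    /** Returns true if this operator is allowed to log in for the company. */
    public boolean canLogin(Connection db, long companyId, long userId) throws Exception {
        String sql = "SELECT user_id, last_accessed FROM operator_session WHERE company_id = ?";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setLong(1, companyId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return true; // no operator has an active session for this company
                }
                long activeUser = rs.getLong("user_id");
                Timestamp lastAccessed = rs.getTimestamp("last_accessed");
                boolean timedOut = lastAccessed == null
                        || lastAccessed.toInstant().plus(TIMEOUT).isBefore(Instant.now());
                // Allowed if the previous session has timed out, or if it is the same
                // operator coming back (quick recovery after a logoff or crash).
                return timedOut || activeUser == userId;
            }
        }
    }
}
```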
"I am using WPF application and the server is written in WCF, however this can be achieved. But what in situation when user has an application opened and was inactive for 30min?"
This system is going to be single-user, so I suggest you start a counter thread when a user logs in. When the counter reaches 30 minutes, write a value to the DB indicating that the user has timed out and other users are free to log in. Obviously, you should do the same thing when the user explicitly logs out.
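A minimal sketch of that inactivity counter, using a scheduler rather than a raw thread; markLoggedOut() is a placeholder for the database update:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of the inactivity counter described above. markLoggedOut() stands in
 * for the database update that frees the login for other operators.
 */
public class InactivityTimer {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> timeoutTask;

    /** Call on login and on every user action to restart the 30-minute countdown. */
    public synchronized void touch(long userId) {
        if (timeoutTask != null) {
            timeoutTask.cancel(false);
        }
        timeoutTask = scheduler.schedule(() -> markLoggedOut(userId), 30, TimeUnit.MINUTES);
    }

    /** Call on explicit logout as well, so the flag is cleared immediately. */
    public void markLoggedOut(long userId) {
        // Placeholder: e.g. UPDATE operator_session SET logged_in = false WHERE user_id = ?
    }
}
```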