How can I use the Social Tables API without authorizing every time?

I have a server-based application that needs to add guest lists, but I don't see any way to do this without performing an /oauth/authorize every time, as the authorization code expires almost immediately after access is granted.

Related

Is it bad practice to automatically use the refresh token in an interval?

While working on implementing a proper auth flow in a React web app, I am presented with different patterns for using access and refresh tokens.
I am considering the following two patterns:
Creating some sort of middleware for the fetch API:
This middleware runs before every request to the backend and checks whether the access token is still valid.
If it is invalid, it first calls the auth server to fetch a new access (and refresh) token.
Creating an interval, independent of all other logic, to keep the access token alive (a sketch of this pattern follows below).
Say the access token is valid for 5 minutes; the interval then runs every 5 minutes to fetch a new access token.
I would also make sure it only keeps refreshing while the user is still active, so that an application left open without any user interaction for a long time automatically logs out.
Any API call simply uses the currently active access token and does not need to worry about checking the token first.
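For illustration, here is a minimal TypeScript sketch of pattern 2. The /auth/refresh endpoint, the accessToken response field, and the activity events are assumptions for the sketch, not part of any particular library:

```typescript
// Minimal sketch of pattern 2 (interval-based refresh). The endpoint
// "/auth/refresh" and the "accessToken" field are assumed names.
let accessToken: string | null = null;
let lastActivity = Date.now();

// Track user activity so a tab left open eventually logs out.
document.addEventListener("click", () => { lastActivity = Date.now(); });
document.addEventListener("keydown", () => { lastActivity = Date.now(); });

const TOKEN_LIFETIME_MS = 5 * 60 * 1000; // access token valid for 5 minutes

async function refreshAccessToken(): Promise<void> {
  const response = await fetch("/auth/refresh", {
    method: "POST",
    credentials: "include", // refresh token kept in an HttpOnly cookie
  });
  if (!response.ok) {
    logout();
    return;
  }
  const body = await response.json();
  accessToken = body.accessToken;
}

function logout(): void {
  accessToken = null;
  window.location.assign("/login");
}

// The interval runs once per token lifetime; it only refreshes while the
// user is active, otherwise it logs out as described above.
setInterval(() => {
  if (Date.now() - lastActivity > TOKEN_LIFETIME_MS) {
    logout();
  } else {
    void refreshAccessToken();
  }
}, TOKEN_LIFETIME_MS);

// API calls just use the current token and never check it themselves.
async function apiGet(path: string): Promise<Response> {
  return fetch(path, { headers: { Authorization: `Bearer ${accessToken}` } });
}
```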
The second approach seems much easier and cleaner to implement to me, since it adds no complexity to data fetching and is otherwise completely independent of and separate from the app.
I've had a hard time researching this question, though, to be honest. I'm not sure whether there is some security issue I'm missing with that approach.
So my questions are:
Is there any security issue with fetching a new access token in an interval from the client side?
Is there a common practice for how SPAs (like the React app I mentioned) handle access tokens?
If yes, what is that common practice?
If there is no security issue, are there other cons of the second approach that I am missing?
Thank you for your answers in advance!
I think the answer depends: if you always refresh every X minutes and you have many active clients, it may create more load on the backend compared to doing it on an as-needed basis. Perhaps not all clients are active all the time?
One thing to look out for is to make sure you don't trigger multiple simultaneous requests for new refresh tokens. If you hit a race condition here, you might be logged out (if you use one-time refresh tokens). A sketch of a client-side guard against this follows.
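As an illustration, a client can funnel all refresh attempts through a single in-flight promise so the refresh token is never redeemed twice in parallel. This is only a sketch; the /auth/refresh endpoint and accessToken field are assumed names:

```typescript
// Single-flight guard: concurrent callers share one in-flight refresh,
// so a one-time refresh token is never redeemed twice at the same time.
let refreshInFlight: Promise<string> | null = null;

async function doRefresh(): Promise<string> {
  const response = await fetch("/auth/refresh", {
    method: "POST",
    credentials: "include",
  });
  if (!response.ok) throw new Error("refresh failed");
  const body = await response.json();
  return body.accessToken as string;
}

export function getFreshAccessToken(): Promise<string> {
  if (refreshInFlight === null) {
    refreshInFlight = doRefresh().finally(() => {
      refreshInFlight = null; // allow a new refresh once this one settles
    });
  }
  return refreshInFlight; // every concurrent caller awaits the same token
}
```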
It is also worth considering the BFF pattern; do watch this video:
Using the BFF pattern to secure SPA and Blazor Applications - Dominick Baier - NDC Oslo 2021

How to implement a one time authentication mechanism?

I'm trying to create a website that authenticates users with a throwaway password, under the assumption that the user might not use the website again (basically one-time access).
I have done my research on OTPs and various authentication solutions, but these don't seem to fit my requirements: most of them rely on users having login credentials for the website, whereas my system would allow access without the need to register.
The implementation of passwordless authentication by Auth0 seems to fit what you're describing. Even if you were not considering a third-party provider it may be useful to go through the documentation.
Basically, a user can log in to a site without any sign-up process. They do so just by requesting that a one-time code be delivered to them, for example by email or SMS.
This way they get quick access without having to set up a user account, and in the event that they do come back, your application can recognize them, because they will most likely use the same mechanism; that is, you can use the email address or mobile phone number as the unique identifier. A rough sketch of the server side of such a flow is below.
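For illustration only, here is a TypeScript sketch of that flow. The in-memory store and the sendMail callback are placeholders you would replace with a real database and mail/SMS service:

```typescript
import { createHash, randomInt } from "node:crypto";

// Placeholder store; a real deployment would persist codes with expiry.
// The email address doubles as the unique identifier: no sign-up needed.
const pendingCodes = new Map<string, { hash: string; expiresAt: number }>();

function hashCode(code: string): string {
  return createHash("sha256").update(code).digest("hex");
}

export function requestCode(
  email: string,
  sendMail: (to: string, text: string) => void, // supplied by the caller
): void {
  const code = String(randomInt(100000, 1000000)); // six random digits
  pendingCodes.set(email, {
    hash: hashCode(code), // store only a hash, never the raw code
    expiresAt: Date.now() + 10 * 60 * 1000, // valid for ten minutes
  });
  sendMail(email, `Your one-time login code is ${code}`);
}

export function verifyCode(email: string, code: string): boolean {
  const entry = pendingCodes.get(email);
  pendingCodes.delete(email); // codes are single use, valid or not
  if (!entry || Date.now() > entry.expiresAt) return false;
  return entry.hash === hashCode(code);
}
```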
Disclosure: I'm an Auth0 engineer.
If you do not require your users to register, why do you need authentication at all?
Why not just set a cookie with a unique identifier on the first visit? You can store data on the server side associated with that identifier. Keep track of when you last saw the user, and if they do not return within a certain period, delete any data you stored for them. A sketch of this idea follows.
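A small sketch of that idea using plain Node.js; the cookie name, retention window, and in-memory store are all illustrative:

```typescript
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

// Visitor data keyed by an anonymous cookie identifier (illustrative).
const visitors = new Map<string, { lastSeen: number; data: unknown }>();
const RETENTION_MS = 30 * 24 * 60 * 60 * 1000; // forget users after 30 days

createServer((req, res) => {
  // Minimal cookie parsing, just enough for this sketch.
  const cookies = new Map(
    (req.headers.cookie ?? "").split("; ").filter(Boolean).map((pair) => {
      const i = pair.indexOf("=");
      return [pair.slice(0, i), pair.slice(i + 1)] as const;
    }),
  );

  let id = cookies.get("visitor_id");
  if (!id || !visitors.has(id)) {
    id = randomUUID(); // first visit: assign a fresh identifier
    visitors.set(id, { lastSeen: Date.now(), data: null });
    res.setHeader("Set-Cookie", `visitor_id=${id}; HttpOnly; Path=/`);
  }
  visitors.get(id)!.lastSeen = Date.now(); // remember when we last saw them
  res.end(`Hello, visitor ${id}`);
}).listen(3000);

// Periodically delete data for users who have not returned in time.
setInterval(() => {
  const cutoff = Date.now() - RETENTION_MS;
  for (const [id, v] of visitors) if (v.lastSeen < cutoff) visitors.delete(id);
}, 60 * 60 * 1000);
```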

Reject queries based on specific client_app_name and nt_username

With the surge of applications that can be used to pull information, my SQL Server is constantly getting hit, and there are a couple of users who keep running refreshes. Is there a way to reject queries based on a specific client_app_name and nt_username?
Alternatively, is there a way to add the combination of user and app to security so as to decline access to SQL Server? I.e., approve the user's access if the client_app_name is Excel but decline it if the app name is 'Mashup Engine'.
What you really need is resource governance. With it you can restrict the resources a user can consume. That way the users can refresh as much as they like, but they won't be able to exhaust the server's resources; their queries will instead slow down as they hit the limits of the resources they are allowed, while other users can still run queries at full speed.
The assignment of sessions to workload groups (which map to resource pools) is based on a classification function run at login time, and this function can consider user name, application name, workstation name, client IP, etc. A sketch follows.
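For example, a classifier along these lines (a sketch only; the pool, group, login, and limit values are placeholders, and the function must be created in the master database):

```sql
-- Sketch: throttle a specific app/user combination via Resource Governor.
CREATE RESOURCE POOL ThrottledPool WITH (MAX_CPU_PERCENT = 10);
CREATE WORKLOAD GROUP ThrottledGroup USING ThrottledPool;
GO

CREATE FUNCTION dbo.rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    -- Route the offending app/user combination into the throttled group.
    IF APP_NAME() = N'Mashup Engine'
       AND SUSER_SNAME() = N'DOMAIN\SomeUser'
        RETURN N'ThrottledGroup';
    RETURN N'default'; -- everyone else keeps full speed
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```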

Architecture for fast globally distributed user quota management

We have built a free, globally distributed mobility analytics REST API. That means we have servers all over the world running different versions (USA, Europe, etc.) of the same application. The services are behind a load balancer, so I can't guarantee that the same user always gets the same application server from one request to the next. The API is public, but users have to provide an API key so that we can match them to their paid request quota.
Since we do heavy number crunching on every request, we want to minimize request times as far as possible, in particular for authentication/authorization and quota monitoring. Since we currently use only one user database (which has to be located in a single data center), there are cases where a user in the US makes a request to an application server in the US, which then authenticates the user in Europe. So we are looking for a solution where the user database interaction:
happens on the same application server
gets synchronized between all application servers
is easy to integrate into a Java application
is fast (changes happen on every request)
Things we have done so far:
a single database on each server > not synchronized, a nightmare
a single database for all servers > OK when used with a slave as fallback, but American users have to authenticate across the Atlantic
started installing BDR but gave up along the way (no time, too complex, hard to make the transition)
looked at redis.io
Since this is my first globally distributed REST API, I wonder how other companies (Yelp, Google, etc.) do this.
Any feedback is very kindly appreciated,
Cheers,
Daniel
There is no single right answer; there are several ways to do this. I'll describe the one I'm most familiar with (which is probably one of the simplest, although not the most robust).
1. Separate authentication and authorization
First of all, separate authentication and authorization. A user authenticating across the Atlantic is fine; every user request requiring authorization to cross the Atlantic is not. Main user credentials (e.g. a password hash) are a centralized resource; there is no way around that. But you do not need the main credentials every time a request has to be authorized.
This is how a company I worked at did it (although this was postgres/python/django, nothing to do with java):
Each server has a cache of database queries in memcached but it does not cache user authentication (memcached is very similar to redis, which you mention).
The authentication is performed in the main data centre, always.
A successful authentication produces a user session that expires after a day.
The session is cached.
This may produce a situation in which a user's action can be authorized by more than one session at a given time. The algorithm is: if at least one of the user's sessions has not expired, the user is authorized. Since the cache is local, there is no need for the costly operation of fetching data across the Atlantic on every request. A sketch of this check follows.
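In TypeScript-flavored pseudocode (an in-memory map stands in for memcached; all names are illustrative):

```typescript
// Local per-server session cache; authorize a request if at least one
// cached session for the user has not yet expired.
interface Session {
  userId: string;
  expiresAt: number; // epoch millis: creation time + one day
}

const localSessionCache = new Map<string, Session[]>(); // keyed by userId

function isAuthorized(userId: string, now: number = Date.now()): boolean {
  const sessions = localSessionCache.get(userId) ?? [];
  return sessions.some((s) => s.expiresAt > now);
}
```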
2. Session revival
Sessions expiring after exactly a day may be annoying for the user, since he can never be sure that his session will not expire in the next couple of seconds. Instead, each request the user makes that is authorized by a session shall extend the session lifetime to a full day again.
This is easy to implement: keep in the session the timestamp of the last request made, and count the session lifetime from that timestamp. If more than a single session may authorize a request, the algorithm shall select the youngest session (the one with the longest lifetime remaining) to update, as in the sketch below.
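Continuing the sketch above (reusing the localSessionCache from the previous block):

```typescript
// Session revival: every authorized request resets the youngest matching
// session's lifetime to a full day, counted from the request's timestamp.
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

function reviveSession(userId: string, now: number = Date.now()): void {
  const live = (localSessionCache.get(userId) ?? []).filter(
    (s) => s.expiresAt > now,
  );
  if (live.length === 0) return;
  // The youngest session is the one with the longest lifetime remaining.
  const youngest = live.reduce((a, b) => (a.expiresAt > b.expiresAt ? a : b));
  youngest.expiresAt = now + ONE_DAY_MS;
}
```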
3. Request logging
This is as far as the live system I worked with went. The remainder of the answer is about how I would extend such a system to make space for logging the requests and verifying request quota.
Assumptions
Let's start by making a couple of design assumptions and arguing why they are good ones. You are now running a distributed system, since not all data in the system lives in a single place. A distributed system shall give priority to response speed and be as horizontal (not hierarchical) as possible; to achieve this, consistency is sacrificed.
The session mechanism above already sacrifices some consistency. For example, if a user logs in and talks continuously to server A for a day and a half, and then a load balancer points the user to server B, the user may be surprised at having to log in again. That is an unlikely situation: in most cases a load balancer would divide the user's requests between server A and server B over the course of that day and a half, and both servers would have live sessions at that point.
In your question you wonder how Google deals with request counting, and I'll argue here that it deals with it in an inconsistent way. By that I mean that you cannot enforce a strict upper limit on requests if you're sacrificing consistency. Companies like Google or Yelp simply say:
"You have 12000 requests per month but if you do more than that you will pay $0.05 for every 10 requests above that".
That allows for an easier design: you can count requests at any time after they happen; the counting does not need to be done in real time.
One last assumption: a distributed system will have problems with duplicated internal data. This happens because parts of the system run in real time while other parts do batch processing, without stopping or timestamping the real-time parts, so you can never be 100% sure of the real-time system's state at any given point. It is therefore mandatory that every request coming from a customer carry a unique identifier of some sort. It can be as simple as customer number + sequence number, but it needs to exist in every request. You may also attach such a unique identifier the moment you receive the request.
Design
Now we extend the user session that is cached on every server (often cached in a different state on different servers that are unaware of each other). For every customer request we store the request's unique identifier as part of the session cache. That's right: the request counter in the central database is not updated in real time.
At a point in time (end-of-day processing, for example), each server performs a batch run over the request identifiers:
Live sessions are duplicated, and the copies are expired.
All request identifiers in expired sessions are concatenated together, and the central database is written to.
All expired sessions are purged.
This produces a race condition: a live session may receive requests while the server is talking to the central database. For that reason we do not purge request identifiers from live sessions. This, in turn, means that a request identifier may be logged to the central database twice, and it is for that reason that the identifiers must be unique: the central database shall ignore request updates whose identifiers have already been logged. A sketch of this batch step follows.
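In this sketch, writeRequestIds is an assumed central-database call that ignores identifiers it has already logged (e.g. via a primary-key constraint); all names are illustrative:

```typescript
// End-of-day flush of request identifiers to the central database.
interface CountedSession {
  userId: string;
  expiresAt: number;
  requestIds: string[]; // e.g. "customerNo:sequenceNo"
}

async function flushRequestLog(
  sessions: CountedSession[],
  writeRequestIds: (ids: string[]) => Promise<void>, // assumed idempotent
  now: number = Date.now(),
): Promise<CountedSession[]> {
  // 1. Duplicate live sessions and expire the copies.
  const expiredCopies = sessions.map((s) => ({ ...s, expiresAt: now }));
  // 2. Concatenate all request identifiers and write them centrally.
  await writeRequestIds(expiredCopies.flatMap((s) => s.requestIds));
  // 3. Purge expired sessions. Live sessions keep their identifiers, so
  //    an identifier may be sent twice later; the central database must
  //    therefore ignore identifiers that are already logged.
  return sessions.filter((s) => s.expiresAt > now);
}
```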
Advantages
99.9% uptime: the batch processing does not interrupt the real-time system.
Reduced writes to the database, and reduced communication with the database in general.
Easy horizontal growth.
Disadvantages
If a server goes down, recovering the requests performed on it may be tricky.
There is no way to stop a customer from making more requests than he is allowed to (that's a peculiarity of distributed systems).
You need to store unique identifiers for all requests; a counter is not enough to measure the number of requests.
Invoicing the user does not change: you just query the central database to see how many requests a customer performed.

What defines a client/user pair for Google API refresh tokens?

According to Google, there is a limit (currently 25) on how many refresh tokens can be issued per client/user pair.
Just to clarify, this refers to each user, right? Meaning that if I have a million users (!), each user could have 25 active refresh tokens? Or does it mean that only 25 of the one million users are able to store refresh tokens on my server?
I am referring to the bottom of this page:
https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtAuthorization#helpme
OK, let me try to explain this:
When a user agrees to allow your application to access their data, you get a refresh token. You should save this refresh token somewhere so that you can use it next time. Then you will never have to ask the user to authenticate again.
But if for some reason you ask the user again whether you can access their data, you will get another refresh token. The first refresh token is still good; you can still use it to access their data. You can do this up to 25 times before the first one gets deleted.
Here is a real-life example of when this can be a problem:
I have an SSIS connection manager that asks the user whether it can access their Google Analytics data (it works with a data reader, but I digress). I have run into a problem where the user has too many packages authenticated: basically, they installed my application too many times during testing, and the first one stopped working.
In the end I just recommended that they use a dedicated account for my task; that way they would reduce the chance of hitting the 25-authentication limit.