How to block multiple sessions on a single account - FusionAuth

I'd like to better understand the role of and techniques used in FusionAuth to manage or limit multiple sessions attempted for a single user's account in our app.
Multiple sessions could be launched from a single laptop using one or more browsers. A single IP address would be linked to this batch of session requests.
Multiple sessions could be launched from multiple devices on a network segment - which may or may not present as multiple IP addresses.
And then we have the possibility of multiple sessions from many different networks.
I'm unaware of what best practices are for this. Our app is essentially stateless / session-less in the backend. FusionAuth doesn't seem to reject multiple session requests but it might be best positioned to do exactly this.
If we want to limit the active session count for each user - does this need to be handled in some middle layer that sits above FusionAuth?

FusionAuth does not currently limit the number of active sessions a user may have in your application or in our SSO.
There are many potential types of limits that could be put in place, such as limiting by IP, device, geographic location, or session count, and these kinds of limits may be best solved by a Web Application Firewall (WAF) or some other specialized network security product.
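If you do decide to enforce a session cap in a middle layer of your own that sits in front of FusionAuth, a minimal sketch could look like the following. It assumes a Redis-backed set of active sessions per user; the class, key names, and limits are all hypothetical, and nothing here is a FusionAuth API.
```java
import redis.clients.jedis.Jedis;

// Hypothetical middle-layer check that runs after a successful login but
// before your app issues its own session/token.
public class SessionGate {
    private static final int MAX_SESSIONS = 3;            // assumed per-user limit
    private static final int SESSION_TTL_SECONDS = 3600;  // assumed session lifetime

    private final Jedis redis = new Jedis("localhost", 6379);

    /** Returns true if the user is allowed to open another session. */
    public boolean tryOpenSession(String userId, String sessionId) {
        String key = "sessions:" + userId;
        long now = System.currentTimeMillis() / 1000;
        // Drop entries whose score (expiry timestamp) is already in the past.
        redis.zremrangeByScore(key, 0, now);
        if (redis.zcard(key) >= MAX_SESSIONS) {
            return false; // at the limit: reject, or evict the oldest session instead
        }
        redis.zadd(key, now + SESSION_TTL_SECONDS, sessionId);
        redis.expire(key, SESSION_TTL_SECONDS);
        return true;
    }
}
```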

Related

Secure ASP.NET Core against credential stuffing

ASP.NET Core has protection against brute-force guessing of passwords by locking the account after a fixed number of failed login attempts.
But is there some protection against credential stuffing, where the attacker tries a lot of logins, but always with different usernames? Locking the account would not help, since the account changes on every attempt.
But maybe there is a way to lock an IP against multiple login attempts, or some other good idea to prevent credential stuffing?
I'd recommend using velocity checks with Redis; this is basically just throttling certain IPs. Some fraudsters will also rotate IPs, so you could additionally detect when logins are happening more frequently than usual (say 10x the norm) and block all logins for that short window. I wrote a blog post detailing some of the above. The code is all in Node, but I gave some high-level examples of how we stop fraud at Precognitive (my current gig). I will continue to build on the code over the next couple of months as I post more in my Account Takeover series.
https://medium.com/precognitive/a-naive-demo-on-how-to-stop-credential-stuffing-attacks-2c8b8111286a
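For reference, a rough Java equivalent of that kind of Redis velocity check might look like this; the threshold, key prefix, and class name are made up for illustration.
```java
import redis.clients.jedis.Jedis;

// Naive per-IP velocity check: count login attempts in a one-minute window
// and block the IP once it crosses a threshold.
public class LoginVelocityCheck {
    private static final int MAX_ATTEMPTS_PER_MINUTE = 10; // assumed threshold

    private final Jedis redis = new Jedis("localhost", 6379);

    public boolean isAllowed(String ipAddress) {
        String key = "login-attempts:" + ipAddress;
        long attempts = redis.incr(key);
        if (attempts == 1) {
            // First attempt in this window: start the one-minute expiry.
            redis.expire(key, 60);
        }
        return attempts <= MAX_ATTEMPTS_PER_MINUTE;
    }
}
```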
IP throttling is a good first step, but unfortunately it won't block a large number of modern credential stuffing attacks. Tactics for savvy credential stuffers have evolved over the past few years to become more and more complex.
To be really effective at blocking credential stuffing, you need to be looking at more than IP. Attackers will rotate IP and User-Agent, and they'll also spoof the User-Agent value. An effective defense strategy identifies and blocks attacks based on real-time analysis of out-of-norm IP and User-Agent activity, plus additional components to enhance specificity (such as browser-based or mobile-app-based fingerprinting).
I wrote a blog post looking at two credential stuffing attacks in 2020, which I call Attack A (low-complexity) and Attack B (high-complexity).
Attack A, the low-complexity example, had the following characteristics:
~150,000 login attempts
1 distinct User-Agent (a widely used version of Chrome)
~1,500 distinct IP addresses (85% from the USA, where most of the app users reside)
Attack B, the high-complexity example, had the following characteristics:
~60,000 login attempts
~20,000 distinct User-Agents
~5,000 distinct IP addresses (>95% from USA, >99% from USA & Canada)
You can see that with Attack A, there were about 100 login attempts per IP address. IP rate-limiting may be effective here, depending on the limit.
However, with Attack B, there were only 12 login attempts per IP address. It would be really hard to argue that IP rate-limiting would be effective for this scenario.
Here's my post with more in-depth data: https://blog.castle.io/how-effective-is-castle-against-credential-stuffing/
Full disclosure: I work for Castle and we offer an API-based product to guard against credential stuffing.

reject queries based on specific client_app_name and nt_username

With a surge of applications that can be used to pull information, my SQL Server is constantly getting tapped, and there are a couple of users that keep running refreshes. Is there a way to reject queries based on a specific client_app_name and nt_username?
Alternatively, is there a way to add the combination of user and app to security to decline access to SQL Server? i.e. approve the user's access if the client_app_name is Excel but decline it if the app name is 'Mashup Engine'.
What you really need is resource governance (SQL Server's Resource Governor). With it you can restrict the resources a user can consume. This way the users can refresh as much as they like, but they won't be able to monopolize the server's resources; their queries will instead slow down as they exhaust the resources allowed to them. Other users will still be able to run queries at full speed.
The assignment of sessions to workload groups (and through them, resource pools) is based on a classifier function run at login time, and this function can consider the user name, application name, workstation name, client IP, and so on.

Architecture for fast globally distributed user quota management

We have built a free, globally distributed mobility analytics REST API. Meaning we have servers all over the world which run different versions (USA, Europe, etc.) of the same application. The services are behind a load balancer, so I can't guarantee that the same user always gets the same application/server when making requests today or tomorrow. The API is public, but users have to provide an API key in order for us to match them to their paid request quota.
Since we do heavy number crunching with every request, we want to minimize request times as far as possible, in particular for authentication/authorization and quota monitoring. Since we currently only use one user database (which has to be located in a single data center), there are cases where users in the US make a request to an application/server in the US which authenticates the user in Europe. So we are looking for a solution where the user database interaction:
happens on the same application server
gets synchronized between all application servers
should be easily integrable into a Java application
should be fast (changes happen on every request)
Things we have done so far:
a single database on each server > not synchronized, nightmare
a single database for all servers > OK when used with a slave as fallback, but American users have to authenticate over the Atlantic
started installing BDR but failed along the way (no time, too complex, hard to make the transition)
looked at redis.io
Since this is my first globally distributed REST API, I wonder how other companies (Yelp, Google, etc.) do this.
Any feedback is very kindly appreciated,
Cheers,
Daniel
There is no right answer, there are several ways to perform this. I'll describe the one I'm most familiar with (and which probably is one of the simplest ways to do it, although probably not the most robust one).
1. Separate authentication and authorization
First of all, separate authentication and authorization. A user authenticating across the Atlantic is fine; every user request that requires authorization to go across the Atlantic is not. The main user credentials (e.g. a password hash) are a centralized resource; there is no way around that. But you do not need the main credentials every time a request needs to be authorized.
This is how a company I worked at did it (although this was Postgres/Python/Django, nothing to do with Java):
Each server has a cache of database queries in memcached, but it does not cache user authentication (memcached is very similar to Redis, which you mention).
The authentication is performed in the main data centre, always.
A successful authentication produces a user session that expires after a day.
The session is cached.
This may produce a situation in which a user has an action authorized by more than one session at a given time. The algorithm is: if at least one user session has not expired, the user is authorized. Since the cache is local, there is no need for the costly operation of fetching data across the Atlantic on every request.
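A minimal sketch of that per-server cache and the "at least one live session" check, assuming an in-memory store and invented class names:
```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Per-server session cache: a user is authorized if at least one cached
// session has not expired; only a cache miss goes back to the main data centre.
public class SessionCache {
    static final Duration SESSION_LIFETIME = Duration.ofDays(1);

    static class Session {
        Instant lastRequestAt;
        Session(Instant lastRequestAt) { this.lastRequestAt = lastRequestAt; }
        boolean isLive(Instant now) {
            return lastRequestAt.plus(SESSION_LIFETIME).isAfter(now);
        }
    }

    private final Map<String, List<Session>> sessionsByUser = new ConcurrentHashMap<>();

    /** Called after the central data centre authenticates the user. */
    public void storeSession(String userId) {
        sessionsByUser.computeIfAbsent(userId, k -> new CopyOnWriteArrayList<>())
                      .add(new Session(Instant.now()));
    }

    /** Authorize locally: true if at least one session is still live. */
    public boolean isAuthorized(String userId) {
        Instant now = Instant.now();
        return sessionsByUser.getOrDefault(userId, List.of())
                             .stream()
                             .anyMatch(s -> s.isLive(now));
    }
}
```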
2. Session revival
Sessions expiring after exactly a day may be annoying for the user, since he can never be sure that his session will not expire in the next couple of seconds. Each request the user makes that is authorized by a session shall extend the session lifetime to a full day again.
This is easy to implement: keep the timestamp of the last request in the session, and count the session lifetime from that timestamp. If more than one session may authorize a request, the algorithm shall select the youngest session (the one with the longest remaining lifetime) to update.
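A sketch of that revival step under the same assumptions (the Session type here is just an illustrative stand-in, mirroring the one in the previous sketch):
```java
import java.time.Duration;
import java.time.Instant;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative session revival: every authorized request pushes the
// last-request timestamp forward, so the session lives for another full day.
public class SessionRevival {
    static final Duration SESSION_LIFETIME = Duration.ofDays(1);

    static class Session {
        Instant lastRequestAt;
        Session(Instant lastRequestAt) { this.lastRequestAt = lastRequestAt; }
        boolean isLive(Instant now) {
            return lastRequestAt.plus(SESSION_LIFETIME).isAfter(now);
        }
    }

    /** Reset the lifetime of the youngest live session that authorized the request. */
    static void revive(List<Session> sessions, Instant now) {
        Optional<Session> youngest = sessions.stream()
                .filter(s -> s.isLive(now))
                .max(Comparator.comparing(s -> s.lastRequestAt));
        youngest.ifPresent(s -> s.lastRequestAt = now);
    }
}
```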
3. Request logging
This is as far as the live system I worked with went. The remainder of the answer is about how I would extend such a system to make space for logging the requests and verifying request quota.
Assumptions
Let's start by making a couple of design assumptions and arguing why they are good assumptions. You're now running a distributed system, since not all data on the system is in a single place. A distributed system shall give priority to response speed and be as horizontal (not hierarchical) as possible; to achieve this, consistency is sacrificed.
The session mechanism above already sacrifices some consistency. For example, if a user logs in and talks continuously to server A for a day and a half, and then a load balancer points the user to server B, the user may be surprised that he needs to log in again. That is an unlikely situation: in most cases a load balancer would divide the user's requests equally over server A and server B over the course of that day and a half, and both servers will have live sessions at that point.
In your question you wonder how Google deals with request counting, and I'll argue here that it deals with it in an inconsistent way. By that I mean that you cannot enforce a strict upper limit on requests if you're sacrificing consistency. Companies like Google or Yelp simply say:
"You have 12000 requests per month but if you do more than that you will pay $0.05 for every 10 requests above that".
That allows for an easier design: you can count requests at any time after they happened; the counting does not need to happen in real time.
One last assumption: a distributed system will have problems with duplicated internal data. This happens because parts of the system are running in real time while parts are doing batch processing, without stopping or timestamping the real-time system, and because you cannot be 100% sure about the state of the real-time system at any given point. It is mandatory that every request coming from a customer have a unique identifier of some sort; it can be as simple as customer number + sequence number, but it needs to exist in every request. You may also add such a unique identifier the moment you receive the request.
Design
Now we extend our user session that is cached on every server (often cached in a different state on different servers that are unaware of each other). For every customer request we store the request's unique identifier as part of the session cache. That's right, the request counter in the central database is not updated in real time.
At a point in time (end of day processing, for example) each server performs a batch processing of request identifiers:
Live sessions are duplicated, and the copies are expired.
All request identifiers in expired sessions are concatenated together, and the central database is written to.
All expired sessions are purged.
This produces a race condition, where a live session may receive requests whilst the server is talking to the central database. For that reason we do not purge request identifiers from live sessions. This, in turn, causes a problem where a request identifier may be logged to the central database twice. And it is for that reason that we need the identifiers to be unique: the central database shall ignore request updates with identifiers already logged.
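Here is a simplified sketch of that batch flush, assuming the central database exposes an idempotent "record these request ids" operation that ignores duplicates; all names are invented.
```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative end-of-day batch flush. Request ids are unique (e.g. customer
// number + sequence number), so the central store can safely ignore duplicates.
public class RequestLogFlusher {

    /** Hypothetical central store; "INSERT ... ON CONFLICT DO NOTHING" in SQL terms. */
    interface CentralRequestLog {
        void recordIgnoringDuplicates(String customerId, Set<String> requestIds);
    }

    static class CachedSession {
        final String customerId;
        final Set<String> requestIds = new HashSet<>();
        boolean expired;
        CachedSession(String customerId) { this.customerId = customerId; }
    }

    private final CentralRequestLog centralLog;

    RequestLogFlusher(CentralRequestLog centralLog) { this.centralLog = centralLog; }

    void flush(List<CachedSession> sessions) {
        for (CachedSession session : sessions) {
            // Copy first: a live session may keep receiving requests while we
            // talk to the central database (the race condition described above),
            // so we never purge identifiers from live sessions.
            Set<String> snapshot = new HashSet<>(session.requestIds);
            centralLog.recordIgnoringDuplicates(session.customerId, snapshot);
        }
        // Purge only the sessions that have already expired.
        sessions.removeIf(s -> s.expired);
    }
}
```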
Advantages
99.9% Uptime, the batch processing does not interrupt the real time system.
Reduced writes to the database, and reduced communication with the database in general.
Easy horizontal growth.
Disadvantages
If a server goes down, recovering the requests performed may be tricky.
There is no way to stop a customer from doing more requests than he is allowed to (that's a peculiarity of distributed systems).
Need to store unique identifiers for all requests, a counter is not enough to measure the number of requests.
Invoicing the user does not change: you just query the central database and see how many requests a customer performed.

How do you prevent brute force attacks on RESTful data services

I'm about to implement a RESTful API for our website (based on WCF Data Services, but that probably does not matter).
All data offered via this API belongs to certain users of my server, so I need to make sure only those users have access to my resources. For this reason, all requests have to be performed with a login/password combination as part of the request.
What's the recommended approach for preventing brute force attacks in this scenario?
I was thinking of logging failed requests denied due to wrong credentials and ignoring requests originating from the same IP after a certain threshold of failed requests has been exceeded. Is this the standard approach, or am I missing something important?
IP-based blocking on its own is risky due to the number of NAT gateways out there.
You might slow down (tar pit) a client if it makes too many requests quickly; that is, deliberately insert a delay of a couple of seconds before responding. Humans are unlikely to complain, but you've slowed down the bots.
I would use the same approach as I would with a web site. Keep track of the number of failed login attempts within a certain window; for example, allow 3 (or 5 or 15) within some reasonable span, say 15 minutes. If the threshold is exceeded, lock the account out and record the time that the lockout occurred. You might log this event as well. After another suitable period has passed, say an hour, unlock the account (on the next login attempt). Successful logins reset the counters and the last lockout time. Note that you never actually attempt a login on a locked-out account; you simply return login failed.
This will effectively rate-limit any brute force attack, rendering an attack against a reasonable password very unlikely to succeed. An attacker, using my numbers above, would only be able to try 3 (or 5 or 15) times per 1.25 hours. Using your logs you could detect when such an attack might be occurring simply by looking for multiple lockouts from the same account on the same day. Since your service is intended to be used by programs, once the program accessing the service has its credentials set properly, it will never experience a login failure unless there is an attack in progress. That would be another indication that an attack might be occurring. Once you know an attack is in progress, you can take further measures to limit access to the offending IPs or involve the authorities, if appropriate, and get the attack stopped.
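As an illustration only, a minimal in-memory version of that lockout logic could look like this; the thresholds mirror the numbers above, and the class and field names are invented.
```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal in-memory lockout tracker: N failed attempts within a window locks
// the account for a fixed period; a successful login resets the counters.
public class LoginLockout {
    private static final int MAX_FAILURES = 3;                        // or 5, or 15
    private static final Duration FAILURE_WINDOW = Duration.ofMinutes(15);
    private static final Duration LOCKOUT_PERIOD = Duration.ofHours(1);

    private static class State {
        int failures;
        Instant firstFailureAt;
        Instant lockedOutAt;
    }

    private final Map<String, State> states = new ConcurrentHashMap<>();

    /** Check before even attempting the real login; a locked account just "fails". */
    public boolean isLockedOut(String account) {
        State s = states.get(account);
        if (s == null || s.lockedOutAt == null) return false;
        if (Instant.now().isAfter(s.lockedOutAt.plus(LOCKOUT_PERIOD))) {
            states.remove(account); // lockout expired, allow the next attempt
            return false;
        }
        return true;
    }

    public void recordFailure(String account) {
        State s = states.computeIfAbsent(account, k -> new State());
        Instant now = Instant.now();
        if (s.firstFailureAt == null || now.isAfter(s.firstFailureAt.plus(FAILURE_WINDOW))) {
            s.failures = 0;            // start a new window
            s.firstFailureAt = now;
        }
        s.failures++;
        if (s.failures >= MAX_FAILURES) {
            s.lockedOutAt = now;       // lock the account and note when
        }
    }

    public void recordSuccess(String account) {
        states.remove(account);        // successful login resets everything
    }
}
```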

Licenses and sessions the RESTful way

This question crossed my mind after I read this post:
“Common REST Mistakes: Sessions are irrelevant”
If sessions are indeed discouraged in a RESTful application, how would you handle licenses in such an application? I'm specifically referring to a concurrent license model, not named licenses: the customer buys X licenses, which means the application may allow up to X users to be logged in simultaneously. That means the application must hold state for the currently logged-in users.
I know I can create a resource called licenses, which will set a cookie or generate a unique ID, and then the client will have to send it with every request. But it's the same as creating a session, right?
If I adopt the stateless approach and ask the client to create an authentication token for every request, how will the application know when to consume and release a license for that client?
Is there an alternative, specifically a more RESTful alternative?
Let me try to connect the dots for you, assuming I interpreted your question correctly.
The link you posted has a valid answer: each request should use HTTP auth. If you need the concept of licences to maintain a certain state for your user, you can most likely link that to the user. You have a (verified) username to go by. You just need to invoke that controller for each request and save its state. There you have your session.
Cookie input should never be trusted for any critical information, but it can be very useful for extra verification, like a security token. I think adding a random security token field to your on-site links would be the RESTful approach to that. It should expire with the 'session', of course.
You may want to consider pushing the license handling concerns down the infrastructure stack one level. Sort of like an Aspect Oriented Programming (AOP) approach if you will. Instead of handling it in the application tier, perhaps, you can push it in to the web server tier.
Without knowing the details of your infrastructure, it is hard to give a specific recommendation. Using the *nix platform as an example, the license handling logic can be implemented as a module for Apache HTTP server.
This approach promotes a separation of concerns across your infrastructure stack. It allows each layer to focus on what it is meant to. The application layer does not need to worry about licensing at all, allowing it to focus strictly on content, which in turn keeps the URL's clean and "RESTful".
If your licensing is based on concurrent users, implementing HTTP Digest is trivial and will let you allow only the maximum number of concurrent logins. Digest has a provision for passing expiration data so your session can be timed out.
Authentication state is held by HTTP authentication and nowhere else, because it is transparent and ubiquitous.
Maybe a more RESTful way of doing licenses would be to limit the rate at which requests are handled, rather than the number of concurrent sessions. Keep track of the number of requests in the last hour, and if it exceeds the number the customer has paid for, serve a 503 Service Unavailable response, along with some text suggesting the user try again later.
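A rough sketch of that rate-based alternative, assuming the paid hourly allowance is looked up elsewhere; all names here are illustrative.
```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative per-customer hourly rate limit: instead of counting concurrent
// sessions, count requests in the current hour and answer 503 when the paid
// allowance is exceeded.
public class LicenseRateLimiter {

    private static class Window {
        Instant startedAt = Instant.now();
        long requests;
    }

    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    /** @return the HTTP status to use: 200 to proceed, 503 to tell the client to back off. */
    public int check(String customerId, long paidRequestsPerHour) {
        Window w = windows.computeIfAbsent(customerId, k -> new Window());
        synchronized (w) {
            if (Duration.between(w.startedAt, Instant.now()).toHours() >= 1) {
                w.startedAt = Instant.now();   // roll over into a new hourly window
                w.requests = 0;
            }
            w.requests++;
            return w.requests <= paidRequestsPerHour ? 200 : 503;
        }
    }
}
```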