ASP.NET Core has protection against brute-force guessing of passwords by locking the account after a fixed number of failed login attempts.
But is there some protection against credential stuffing, where the attacker tries a lot of logins, but always with different usernames? Locking the account would not help, since the account changes on every attempt.
But maybe there is a way to lock an IP after multiple login attempts, or some other good idea to prevent credential stuffing?
I'd recommend using velocity checks with Redis; this is basically just throttling certain IPs. Some fraudsters will also rotate IPs, so you could additionally detect when logins are happening more frequently than normal (say 10x the norm) and start to block all logins for that short window. I wrote a blog post detailing some of the above. The code there is all in Node, but I did give some high-level examples of how we stop fraud at Precognitive (my current gig). I will continue to build on the code over the next couple of months as I post more in my Account Takeover series.
https://medium.com/precognitive/a-naive-demo-on-how-to-stop-credential-stuffing-attacks-2c8b8111286a
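For what it's worth, here is a minimal sketch of that kind of Redis velocity check (not the code from the post; the key name, window and threshold are assumptions I've picked for illustration):

```typescript
// Minimal velocity-check sketch: count login attempts per IP in Redis
// and refuse further attempts once a threshold is crossed.
// Assumes a local Redis instance and the "ioredis" client.
import Redis from "ioredis";

const redis = new Redis();          // connects to localhost:6379 by default
const WINDOW_SECONDS = 60;          // sliding window size (assumption)
const MAX_ATTEMPTS_PER_WINDOW = 10; // per-IP threshold (assumption)

export async function allowLoginAttempt(ip: string): Promise<boolean> {
  const key = `login-attempts:${ip}`;

  // INCR creates the key with value 1 if it does not exist yet.
  const attempts = await redis.incr(key);

  // Start the expiry window on the first attempt only, so the counter
  // resets WINDOW_SECONDS after the first request, not the last one.
  if (attempts === 1) {
    await redis.expire(key, WINDOW_SECONDS);
  }

  return attempts <= MAX_ATTEMPTS_PER_WINDOW;
}

// Usage (e.g. at the top of a login handler):
// if (!(await allowLoginAttempt(req.ip))) return res.status(429).end();
```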
IP throttling is a good first step, but unfortunately it won't block a large number of modern credential stuffing attacks. Tactics for savvy credential stuffers have evolved over the past few years to become more and more complex.
To be really effective at blocking credential stuffing, you need to be looking at more than IP. Attackers will rotate IP and User-Agent, and they'll also spoof the User-Agent value. An effective defense strategy identifies and blocks attacks based on real-time analysis of out-of-norm IP and User-Agent activity, plus additional components to enhance specificity (such as browser-based or mobile-app-based fingerprinting).
I wrote a blog post looking at two credential stuffing attacks in 2020, which I call Attack A (low-complexity) and Attack B (high-complexity).
Attack A, the low-complexity example, had the following characteristics:
- ~150,000 login attempts
- 1 distinct User-Agent (a widely used version of Chrome)
- ~1,500 distinct IP addresses (85% from the USA, where most of the app users reside)
Attack B, the high-complexity example, had the following characteristics:
- ~60,000 login attempts
- ~20,000 distinct User-Agents
- ~5,000 distinct IP addresses (>95% from the USA, >99% from the USA & Canada)
You can see that with Attack A, there were about 100 login attempts per IP address. IP rate-limiting may be effective here, depending on the limit.
However, with Attack B, there were only 12 login attempts per IP address. It would be really hard to argue that IP rate-limiting would be effective for this scenario.
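To make the per-IP and per-User-Agent numbers concrete, here is a rough sketch (my own illustration, not Castle's product) of aggregating a window of login events and flagging it when the volume or the User-Agent spread is far outside a baseline:

```typescript
// Rough illustration of out-of-norm detection on a window of login events.
// The thresholds and the LoginEvent shape are assumptions for this sketch.
interface LoginEvent {
  ip: string;
  userAgent: string;
  success: boolean;
}

interface WindowStats {
  attempts: number;
  distinctIps: number;
  distinctUserAgents: number;
  attemptsPerIp: number;
}

export function windowStats(events: LoginEvent[]): WindowStats {
  const ips = new Set(events.map(e => e.ip));
  const userAgents = new Set(events.map(e => e.userAgent));
  return {
    attempts: events.length,
    distinctIps: ips.size,
    distinctUserAgents: userAgents.size,
    attemptsPerIp: events.length / Math.max(ips.size, 1),
  };
}

// Flag the window if traffic is, say, 10x the usual volume, or if the
// number of distinct User-Agents is wildly out of line with normal traffic.
export function looksLikeStuffing(current: WindowStats, baseline: WindowStats): boolean {
  return (
    current.attempts > baseline.attempts * 10 ||
    current.distinctUserAgents > baseline.distinctUserAgents * 10
  );
}
```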
Here's my post with more in-depth data: https://blog.castle.io/how-effective-is-castle-against-credential-stuffing/
Full disclosure: I work for Castle and we offer an API-based product to guard against credential stuffing.
I need to do some security validation on my program, and one of the things I need to answer, related to authentication, is: "Verify that all authentication decisions are logged, including linear back-offs and soft-locks."
Does anyone know what linear back-off and soft-locks mean?
Thank you in advance,
Thais.
I am doing my research on OWASP ASVS. Linear back-off and soft lock are authentication controls that are used to prevent brute-force attacks and can also help against DoS.
Linear back-off can be implemented by blocking the user/IP for a particular time after a failed login attempt, with that time increasing exponentially on each further failure, e.g. block for 5 minutes after the first failed login, 25 minutes after the second, 125 minutes after the third, and so on.
As per my understanding, based on some articles and on implementations in applications like Oracle WebLogic, a soft lock is much easier to implement: the IP address (which I think also helps protect against DoS and brute force using automated tools) or user name is logged in the database for every failed login attempt, and once a certain threshold of failed attempts is reached (e.g. 5) the IP address is blocked. Once the account has been soft-locked, at runtime the application does not try to validate the credentials against the backend system, thus preventing the backend account from being permanently locked.
The ASVS verification requirement is actually quite clear on this, though:
"Verify that a resource governor is in place to protect against vertical (a single account tested against all possible passwords) and horizontal brute forcing (all accounts tested with the same password e.g. “Password1”). A correct credential entry should incur no delay. For example, if an attacker tries to brute force all accounts with the single password “Password1”, each incorrect attempt incurs a linear back off (say 5, 25, 125, 625 seconds) with a soft lock of say 15 minutes for that IP address before being allowed to proceed. A similar control should also be in place to protect each account, with a linear back off configurable with a soft lock against the user account of say 15 minutes before being allowed to try again, regardless of source IP address. Both these governor mechanisms should be active simultaneously to protect against diagonal and distributed attacks."
I'm creating an app where user submissions (e.g. photo) are designed to be captured via crowdsourcing. The app connects to an API using an API key, and the app then submits the data anonymously.
We want to avoid the overhead of people creating user accounts and passwords.
However, it seems to me this is vulnerable to the problem of the key being revealed. The result is that spammy submissions could be made much more quickly via browser/wget HTTP requests. Because the app is installed on people's devices, it would take a long time for us to be able to withdraw a key and replace it with another.
The approaches I can think of to deal with this problem are:
1. Hope that the key stays secret. Not ideal from a risk perspective. Using HTTPS for the API endpoint would reduce this risk, but presumably the app could still be decompiled to reveal it (not that in practice anyone would really bother).
2. Store a fixed username and password in the app, and submit as that. That basically seems to run into the same problem - if the credentials are leaked, then this has the same problem as 1.
3. Require a first-run fetch of a token to auto-create a username and password. However, if the key is compromised then this is no more secure. Also, this means we end up with lots of junky usernames and passwords in our database that really don't mean anything.
4. Not considered desirable: force users to create a username/password. However, that then means a lot of messing around with accounts, and it compromises the anonymity of submissions, meaning data protection implications.
Are there standard patterns dealing with this scenario?
The first time the app runs, it could get a random token from the server, store it, and use it on all subsequent requests. The server just checks that the token is one it produced itself. After each request, block the token for 5 minutes (or keep a counter so that 10 requests are OK but the 11th gets blocked, depending on your use case). When a token gets misused, block it; the user will then have to uninstall/reinstall your app, or, if he made a script to emulate the app, he'd have to re-register after every few posts (plus you can limit the number of registrations per IP or something similar).
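A minimal sketch of that first-run token scheme, server-side (the limits and the in-memory store are assumptions for illustration; in practice you would keep the tokens in a database):

```typescript
import { randomBytes } from "crypto";

// Server-side sketch of the first-run token scheme described above.
// Tokens are opaque random strings issued once per install; every
// submission is counted and a misbehaving token is simply blocked.
const MAX_SUBMISSIONS_PER_HOUR = 10; // assumption
const tokens = new Map<string, { count: number; windowStart: number; blocked: boolean }>();

export function issueToken(): string {
  const token = randomBytes(32).toString("hex");
  tokens.set(token, { count: 0, windowStart: Date.now(), blocked: false });
  return token;
}

export function allowSubmission(token: string, now = Date.now()): boolean {
  const state = tokens.get(token);
  if (!state || state.blocked) return false; // unknown or blocked token

  // Reset the counter once an hour.
  if (now - state.windowStart > 60 * 60 * 1000) {
    state.count = 0;
    state.windowStart = now;
  }

  state.count += 1;
  if (state.count > MAX_SUBMISSIONS_PER_HOUR) {
    state.blocked = true; // misuse: block the token from now on
    return false;
  }
  return true;
}
```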
You can assume any fixed credentials will be compromised. A good attacker can and will reverse-engineer the client. On the flip-side, a username/password combo will compromise anonymity (and nothing is stopping a spammer from creating an account).
Honestly, this is a very difficult problem. The (inelegant) solution involves something like a captcha where you provide a problem that is difficult for a bot but easy for a human to solve (for the record, I think captchas are almost useless, although there have been some less annoying alternatives).
Alternatively, sites like Facebook use sophisticated algorithms to detect spam. (This is a difficult approach so I would not recommend it unless you have the manpower to dedicate to it).
I'm about to implement a RESTful API for our website (based on WCF Data Services, but that probably does not matter).
All data offered via this API belongs to certain users of my server, so I need to make sure only those users have access to my resources. For this reason, all requests have to be performed with a login/password combination as part of the request.
What's the recommended approach for preventing brute force attacks in this scenario?
I was thinking of logging failed requests denied due to wrong credentials and ignoring requests originating from the same IP after a certain threshold of failed requests has been exceeded. Is this the standard approach, or am I missing something important?
IP-based blocking on its own is risky due to the number of NAT gateways out there.
You might slow down (tar pit) a client if it makes too many requests quickly; that is, deliberately insert a delay of a couple of seconds before responding. Humans are unlikely to complain, but you've slowed down the bots.
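A tar pit can be as simple as this sketch (the window, threshold and delay are arbitrary assumptions):

```typescript
// Tar-pit sketch: if a client has made many requests recently, delay the
// response by a couple of seconds instead of rejecting it outright.
const recentRequests = new Map<string, number[]>(); // client id -> timestamps

const WINDOW_MS = 60_000;   // look at the last minute (assumption)
const FREE_REQUESTS = 20;   // no delay below this count (assumption)
const DELAY_MS = 2_000;     // deliberate slow-down

const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

export async function tarpit(clientId: string): Promise<void> {
  const now = Date.now();
  const timestamps = (recentRequests.get(clientId) ?? []).filter(t => now - t < WINDOW_MS);
  timestamps.push(now);
  recentRequests.set(clientId, timestamps);

  if (timestamps.length > FREE_REQUESTS) {
    await sleep(DELAY_MS); // humans barely notice; bots get throttled
  }
}
```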
I would use the same approach as I would with a web site. Keep track of the number of failed login attempts within a certain window -- say allow 3 (or 5, or 15) within some reasonable span, say 15 minutes. If the threshold is exceeded, lock the account out and record the time the lockout occurred. You might log this event as well. After another suitable period has passed, say an hour, unlock the account (on the next login attempt). Successful logins reset the counters and the last lockout time. Note that you never actually attempt a login on a locked-out account; you simply return "login failed".
This will effectively rate-limit any brute-force attack, rendering an attack against a reasonable password very unlikely to succeed. An attacker, using my numbers above, would only be able to try 3 (or 5, or 15) passwords per 1.25 hours. Using your logs, you could detect when such an attack was possibly occurring simply by looking for multiple lockouts on the same account on the same day. Since your service is intended to be used by programs, once the program accessing the service has its credentials set properly, it will never experience a login failure unless there is an attack in progress. This would be another indication that an attack might be occurring. Once you know an attack is in progress, you can take further measures to limit access to the offending IPs or involve the authorities, if appropriate, and get the attack stopped.
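Here is a sketch of that bookkeeping with the numbers above (3 failures in 15 minutes, one-hour lockout); the in-memory map and the verify callback are stand-ins for your real user store and credential check:

```typescript
// Per-account lockout sketch: 3 failures within 15 minutes locks the
// account for an hour; a successful login resets everything. While locked,
// the real credential check is never even attempted.
const MAX_FAILURES = 3;
const WINDOW_MS = 15 * 60 * 1000;
const LOCKOUT_MS = 60 * 60 * 1000;

interface AccountState {
  failures: number[];     // timestamps of recent failures
  lockedAt: number | null;
}

const accounts = new Map<string, AccountState>();

export function login(
  username: string,
  password: string,
  verify: (u: string, p: string) => boolean, // your real credential check
  now = Date.now()
): boolean {
  const state = accounts.get(username) ?? { failures: [], lockedAt: null };

  // Locked out? Just report failure without touching the backend.
  if (state.lockedAt !== null && now - state.lockedAt < LOCKOUT_MS) {
    return false;
  }
  state.lockedAt = null; // lockout expired: unlock on this attempt

  if (verify(username, password)) {
    accounts.delete(username); // success resets counters and lockout time
    return true;
  }

  state.failures = state.failures.filter(t => now - t < WINDOW_MS);
  state.failures.push(now);
  if (state.failures.length >= MAX_FAILURES) {
    state.lockedAt = now; // threshold exceeded: lock and log the event
  }
  accounts.set(username, state);
  return false;
}
```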
If I had a poll on my site and I didn't want to require registration to vote, but I only wanted to allow each visit one vote, how might I do this?
Let's say a visitor from IP 123.34.243.57 visits the site and votes. Would it then be safe to disallow anyone from 123.34.243.* from voting? Is this a good strategy?
What's another one?
This is a fundamental challenge with all voting sites on the Internet, and you're just breaking the surface of the problem.
The way you've phrased it, you "only want to allow each visit one vote", which indicates that you want to allow them to vote once each time they open their browser and go to the site. I don't think this is really what you seek.
I suspect what you want is that a given individual Person can vote only once ever (per survey, maybe).
Once you've framed the question properly, the problem becomes much clearer: you're not trying to identify an Internet node (IP address), a visit (session cookie), a browser instance (persistent cookie), or a computer (also difficult to identify).
You can use techniques with cookies, and they are suitable for a typical user. Subverting this technique is as easy as:
- Clearing your cookies in the browser,
- Disallowing cookies in the browser,
- Opening another browser,
- Walking to another computer,
- Using an anonymizer,
- ... endless other ways.
You can do validation by e-mail address, but you indicated you don't want to do registration, so I don't believe that solves your problem either.
If you really need to identify a unique user for a voting system, you'll need to have some authority who's willing to vouch for the identity of any given user, or only allow the software to be accessed from a trusted platform.
The first technique requires registration (and often a costly and time-consuming registration at that), that verifies the actual legal name and location of the individual. Then, using Public Key Infrastructure (aka Digital Certificates), you can identify an individual person based on the credentials he supplies.
The second technique, requiring a trusted platform, relies on the hardware following certain pre-determined behavior. You could, for example, create a voting site that works through the XBox 360 or iPhone. You would create an app that is installed to one of those devices. Based on the way the platform is protected, you could use uniqueness characteristics, such as the hardware address or Live ID on the XBox 360 or the hardware address or telephone number on the iPhone, to get general assurance that the user is the same one who has visited before. Because you have control over the application and the user specifically does not, due to the nature of the trusted platform, you have reasonable assurance that most users will not be able to subvert the intent of the application.
I suspect this is a long-winded way of saying you can do it, but it's a far from easy problem to solve.
Consider political elections and how much resources and energy goes into making those fair and anonymous, and still it's a very challenging problem.
Using the public IP for this would probably be a bad idea. Unique visitors from the same corporate LAN would all look like one user if you use this approach.
Perhaps cookies? I believe that is what most sites use.
Combine with some sort of monitoring, automatic or manually (for instance log file analysis). Be suspicious of traffic patterns that indicate a script.
No, you can't use IP addresses or IP spans to identify unique users, for several reasons:
- Blocking a whole span will block users who haven't voted.
- People who get an IP address dynamically will get a different IP address later.
- People in a local network (like a big company) share the same public IP address.
You could use a cookie to flag who has voted. That will be a lot better as it doesn't hit as blindly, but it's of course not completely accurate as people can clear the cookies and browse with more than one browser.
To make a completely accurate identification of the users, so that you are really sure that no one votes more than once, you need a login for the users. Well, except for the fact that people could create more than one account, of course...
Blocking an IP range is not a good strategy. You have two options to identify a user who has already voted: their IP and a cookie. After they have voted, set a cookie and don't allow them to vote again.
They can clear the cookie and change their IP, but that's acceptable for anonymous voting. If you want a better strategy, let them register before voting.
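For illustration, the cookie approach from these answers is only a few lines in an Express handler (my own sketch; the cookie name, poll choices and middleware are assumptions, and as noted it's trivially defeated by clearing cookies):

```typescript
// Express sketch of cookie-based vote flagging: if the "voted" cookie is
// present, refuse the vote; otherwise record it and set the cookie.
// Assumes express and cookie-parser.
import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

const votes: Record<string, number> = { yes: 0, no: 0 };

app.post("/poll/:choice", (req, res) => {
  if (req.cookies.voted === "1") {
    return res.status(403).send("You have already voted.");
  }

  const choice = req.params.choice;
  if (!(choice in votes)) return res.status(400).send("Unknown choice.");

  votes[choice] += 1;
  // One-year cookie marking this browser as having voted.
  res.cookie("voted", "1", { maxAge: 365 * 24 * 60 * 60 * 1000, httpOnly: true });
  res.send("Thanks for voting!");
});

app.listen(3000);
```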
You should block just that particular IP, not the whole IP range!
If you don't have registration, this is the best solution for you, but not for users!
You can prevent someone from voting multiple times, but you may also block some other users from voting, and that's because of NAT.
Network Address Translation (NAT) allows multiple users to use a single IP to access the internet.
But this is OK because NAT is not used heavily and few users will be disallowed from voting.
However, cookies are not a good solution either, because the user can easily erase the browser cookies and vote again. Even worse, he/she can write a script to vote automatically many times!
I've been given the task of finding and evaluating some authentication libraries for use in one of our products and one of the selling features being pushed by some solutions is "two-factor authentication".
What is this method and how does it work? Are there better methods (such as three-factor authentication, I guess)?
Two-factor authentication means using two factors to authenticate a person (or sometimes a process).
This might be a PIN (something you know) and a debit card (something you have).
There are many authentication factors that might be used:
Authentication factors apply for a special procedure of authenticating a person as an individual with definitively granted access rights. There are different factor types for authentication:
- Human factors are inherently bound to the individual, for example biometrics ("Something you are").
- Personal factors are otherwise mentally or physically allocated to the individual, as for example learned code numbers ("Something you know").
- Technical factors are bound to physical means, as for example a pass, an ID card or a token ("Something you have").
From Wikipedia.
Which factors you choose depends on the type of access required, the security needed, cost, and especially what people are willing to put up with.
People get irritated with strong passwords that change every 4 months, so you might find employees are happier with laptops that have fingerprint scanners, where they can use a weak password plus a fingerprint - two-factor authentication may be easier for users.
But others might not like the privacy implications of biometric security and would rather carry around a keychain device that produces numbers which are typed in along with a password.
High security situations may require all three factors - something you have such as a card, something you are such as retinal imaging, and something you know such as a password.
But the costs and irritation go up as you add more levels.
-Adam
"Are there better methods (such as three-factor authentication, I guess)?"
The issue isn't simply more factors. It's a better mix of factors.
Passwords are easily lost and compromised. People write them on stickies and put them on the bottom of their keyboards.
Other non-password factors are part of the mix. For browser-based apps, you can use the IP address and other PC-specific material that floats in as part of the HTTP headers. For desktop apps (like VPN connections), independent key generators or plug-in USB readers might provide additional factors.
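As a rough illustration of that idea, you could hash a few request attributes into a weak device fingerprint and compare it with the one recorded when the user enrolled (which headers to include, and how much to trust them, is entirely an assumption; all of these values can be spoofed, so this only supplements a password):

```typescript
import { createHash } from "crypto";

// Weak "additional factor" sketch: hash a handful of request attributes into
// a fingerprint and compare it to the one recorded at enrolment.
interface RequestInfo {
  ip: string;
  userAgent: string;
  acceptLanguage: string;
}

export function fingerprint(info: RequestInfo): string {
  return createHash("sha256")
    .update([info.ip, info.userAgent, info.acceptLanguage].join("|"))
    .digest("hex");
}

export function matchesEnrolledDevice(info: RequestInfo, enrolled: string): boolean {
  return fingerprint(info) === enrolled;
}
```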
It's when two (or more) different factors are used in conjunction to authenticate someone.
For example, a bank might ask you for your account number and PIN. And sometimes, like when you call call centers, they might ask you for additional factors such as your name, date of birth, phone number, address, etc.
The theory is that the more factors you can authenticate against, the higher the probability that you are dealing with the correct person. How well it works, and how much more secure it really is, is debatable in my opinion...
Factors include:
- Human factors are inherently bound to the individual, for example biometrics ("Something you are").
- Personal factors are otherwise mentally or physically allocated to the individual, as for example learned code numbers ("Something you know").
- Technical factors are bound to physical means, as for example a pass, an ID card or a token ("Something you have").
See: http://en.wikipedia.org/wiki/Two-factor_authentication
I'll take this from a completely different tack. All these answers are correct, of course, but I want to broaden the topic a bit - to think about where and when to apply two-factor authentication. There are three areas where strong authentication can be used: session authentication, mutual authentication and transaction authentication. Session auth is what most people think about when they think about 2FA. But imagine if people only had to use an OTP when making a banking transaction. The attack surface goes from "when logged in" to "when making a transaction", which is much smaller. If the transaction authentication uses a public key system to sign the transaction, then all the better.
Mutual authentication is any system that attempts to thwart MITM attacks. You can think of the little pictures some banking sites use, but they are totally ineffectual because there is no crypto involved. Here's how we do mutual auth, by validating the site's SSL certificate for the user: http://www.wikidsystems.com/learn-more/technology/mutual_authentication/. There are other ways to do the same thing, of course.
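Since OTPs come up in several of these answers (the keychain device that produces numbers, or an OTP at transaction time), here is a hedged, self-contained sketch of an RFC 6238 style time-based OTP check (30-second steps, 6 digits) that a server could run as the second factor after the normal password check; the drift window and secret handling are assumptions:

```typescript
import { createHmac } from "crypto";

// Self-contained TOTP (RFC 6238 style) sketch: derive the expected 6-digit
// code from a shared secret and the current 30-second time step, and accept
// a small amount of clock drift.
function hotp(secret: Buffer, counter: number): string {
  const buf = Buffer.alloc(8);
  buf.writeBigUInt64BE(BigInt(counter));

  const digest = createHmac("sha1", secret).update(buf).digest();
  const offset = digest[digest.length - 1] & 0x0f;
  const code =
    ((digest[offset] & 0x7f) << 24) |
    (digest[offset + 1] << 16) |
    (digest[offset + 2] << 8) |
    digest[offset + 3];

  return (code % 1_000_000).toString().padStart(6, "0");
}

export function verifyTotp(secret: Buffer, submitted: string, now = Date.now()): boolean {
  const step = Math.floor(now / 1000 / 30);
  // Accept the previous, current and next step to tolerate clock drift.
  return [-1, 0, 1].some(drift => hotp(secret, step + drift) === submitted);
}

// Second-factor check after the password has already been verified:
// if (!verifyTotp(userSecret, req.body.otp)) reject();
```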