Block a user who refreshes the page too often [duplicate] - apache

What techniques and/or modules are available to implement robust rate limiting (requests or bytes per IP per unit of time) in Apache?

The best
mod_evasive (Focused more on reducing DoS exposure)
mod_cband (Best featured for 'normal' bandwidth control)
and the rest
mod_limitipconn
mod_bw
mod_bwshare

As stated in this blog post it seems possible to use mod_security to implement a rate limit per second.
The configuration is something like this:
SecRuleEngine On
<LocationMatch "^/somepath">
    SecAction "initcol:ip=%{REMOTE_ADDR},pass,nolog"
    SecAction "phase:5,deprecatevar:ip.somepathcounter=1/1,pass,nolog"
    SecRule IP:SOMEPATHCOUNTER "@gt 60" "phase:2,pause:300,deny,status:509,setenv:RATELIMITED,skip:1,nolog"
    SecAction "phase:2,pass,setvar:ip.somepathcounter=+1,nolog"
    Header always set Retry-After "10" env=RATELIMITED
</LocationMatch>
ErrorDocument 509 "Rate Limit Exceeded"

There are numerous ways to do this, including web application firewalls, but the easiest thing to implement is an Apache module.
One such module I like to recommend is mod_qos. It's a free module that is very effective against certain DoS, brute-force and Slowloris-type attacks. It will ease up your server load quite a bit.
It is very powerful.
The current release of the mod_qos module implements control mechanisms to manage:
- The maximum number of concurrent requests to a location/resource (URL) or virtual host.
- Limitation of the bandwidth, such as the maximum allowed number of requests per second to a URL or the maximum/minimum of downloaded kbytes per second.
- Limits the number of request events per second (special request conditions).
- Limits the number of request events within a defined period of time.
- It can also detect very important persons (VIP) which may access the web server without or with fewer restrictions.
- Generic request line and header filter to deny unauthorized operations.
- Request body data limitation and filtering (requires mod_parp).
- Limits the number of request events for individual clients (IP).
- Limitations on the TCP connection level, e.g., the maximum number of allowed connections from a single IP source address or dynamic keep-alive control.
- Prefers known IP addresses when the server runs out of free TCP connections.
Here is a sample config showing what you can use it for. There are hundreds of possible configurations to suit your needs; visit the site for more info on the controls.
Sample configuration:
# minimum request rate (bytes/sec at request reading):
QS_SrvRequestRate 120
# limits the connections for this virtual host:
QS_SrvMaxConn 800
# allows keep-alive support till the server reaches 600 connections:
QS_SrvMaxConnClose 600
# allows max 50 connections from a single ip address:
QS_SrvMaxConnPerIP 50
# disables connection restrictions for certain clients:
QS_SrvMaxConnExcludeIP 172.18.3.32
QS_SrvMaxConnExcludeIP 192.168.10.
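Since the question is specifically about requests per second, mod_qos can also throttle request rates per URL. A hedged sketch (the paths and numbers below are illustrative; check the mod_qos documentation for the exact directive semantics):
# allow at most 100 concurrent requests to this location:
QS_LocRequestLimit /downloads 100
# throttle this location to roughly 20 requests per second
# (excess requests are delayed rather than rejected):
QS_LocRequestPerSecLimit /somepath 20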
http://opensource.adnovum.ch/mod_qos/

In Apache 2.4, there's a new stock module called mod_ratelimit (note that it limits bandwidth per connection rather than requests per second). For emulating modem speeds, you can use mod_dialup. Though I don't see why you couldn't just use mod_ratelimit for everything.
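For reference, mod_ratelimit is configured as an output filter; a minimal sketch, with an illustrative path and KiB/s value:
<IfModule mod_ratelimit.c>
    <Location "/downloads">
        # limit response bandwidth to about 400 KiB/s per connection
        SetOutputFilter RATE_LIMIT
        SetEnv rate-limit 400
    </Location>
</IfModule>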

Sadly, mod_evasive won't work as expected when used in non-prefork configurations (recent Apache setups mostly use the threaded worker or event MPMs).

Depends on why you want to rate limit.
If it's to protect against overloading the server, it actually makes sense to put NGINX in front of it and configure rate limiting there. This makes sense because NGINX uses far fewer resources, on the order of a few MB per ten thousand connections. So if the server is flooded, NGINX will do the rate limiting (using an insignificant amount of resources) and only pass the allowed traffic to Apache.
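A hedged sketch of that setup (the zone name, rate and backend address are illustrative; limit_req_zone/limit_req are standard Nginx directives):
# in the http block: a shared-memory zone keyed by client IP
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        # allow short bursts, reject the rest with 429
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:8080;   # Apache listening behind Nginx
    }
}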
If all you're after is simplicity, then use something like mod_evasive.
As usual, if it's to protect against DDoS or DoS attacks, use a service like Cloudflare, which also offers rate limiting.

One more option - mod_qos
Not simple to configure - but powerful.
http://opensource.adnovum.ch/mod_qos/

Related

Fail2ban: how to ban on SSH "past MaxStartups"?

My SFTP server with fail2ban and UFW is running well. But now I've noticed that IP addresses do not seem to be banned when their connection is dropped with "past MaxStartups".
My auth.log looks like
Feb 26 23:19:42 SFTPSERVER sshd[2719]: drop connection #10 from [1xx.1xx.xx.xx]:57970 on [xx.xx.xx.xx]:22 past MaxStartups
In the fail2ban.log there is no entry for that IP, so it seems this log entry is not matched by fail2ban. How can I add a filter for it? Do I have to add some regex to sshd.conf in the filter.d directory of my fail2ban folder?
MaxStartups can hit anyone, i.e. it's the maximum number of unauthenticated connections (from ALL sources, combined) before the SSH daemon will kill any new ones, until some of those unauthenticated connections have dropped.
So Fail2Ban not banning on those is correct... (otherwise it might ban you before you even had a chance to authenticate.)
Enable more fail2ban rules:
Banning IP addresses based on being "past MaxStartups" would be quite random, and it is not implemented for that reason.
Rather, consider banning IPs that contribute to MaxStartups counters while not being covered by the default fail2ban rules. The rules are typically located in:
/etc/fail2ban/filter.d/sshd.conf
Specifically, look for rules that match the term 'preauth' - those are not included in defaults since they are not conclusively attacks (it could be you trying to connect but the network failed just at that time). They do however significantly contribute to your MaxStartups counter as they commonly dangle for a long time before timing out or similar. If your ban is not too aggressive, you might want to include those by uncommenting them.
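Note that in newer fail2ban releases (0.10+) these preauth patterns are grouped under the sshd filter's "aggressive"/"ddos" modes rather than being commented out; a hedged jail.local sketch, assuming such a release and illustrative numbers:
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
# "aggressive" also matches preauth disconnects that the default mode ignores
filter   = sshd[mode=aggressive]
port     = ssh
maxretry = 5
findtime = 600
bantime  = 3600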
Other strategies:
Increase MaxStartups and just allow that background noise to consume a bit of your resources
Change your SSH port to drastically reduce background noise (troublesome for many apps, though)
Set IP filters in your firewall to limit the sources allowed to connect to SSH (see the iptables sketch after this list)
Use port knocking (this practically reduces SSH use to your own though)
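A minimal sketch of that firewall-filter idea, assuming iptables and an illustrative trusted subnet:
# allow SSH only from a trusted subnet, drop everything else
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP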

Is it possible to find out the version of Apache HTTPD when ServerSignature is off?

I have a question: can I find out the version of Apache when the full signature is disabled? Is it even possible, and if so, how? I think it must be possible, because black hats who hack big corporate servers need knowledge of the version of the victim's services. What do you think? Thanks.
Well, for a start there are two (or even three) things to hide:
The Server response header - which shows the version. This cannot be turned off in Apache config, but ServerTokens Prod reduces it to just "Apache".
ServerSignature - which displays the server version in the footer of error pages; set it to Off.
X-Powered-By - which is not used by Apache but by back-end servers and services it might send requests to (e.g. PHP, J2EE servers, etc.).
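A hedged sketch of the corresponding configuration (the PHP setting lives in php.ini, not in the Apache config):
# httpd.conf / apache2.conf
ServerTokens Prod      # Server: Apache (no version, OS or module list)
ServerSignature Off    # no version string in error-page footers

# php.ini (for the X-Powered-By header added by PHP)
expose_php = Off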
Now, servers do show some information due to differences in how they operate or how they interpret the spec. For example, the order of response headers, capitalisation, and how they respond to certain requests all give clues to what server software might be answering HTTP requests. However, using this to fingerprint a specific version of that software is more tricky unless there was an obvious, observable change from the client side.
Other options include looking at the server-status page - though you would hope any administrator clever enough to reduce the default server header would also restrict access to the server-status page. Or through another security hole (e.g. being able to upload executable scripts or the like).
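For completeness, restricting the status page only takes a few lines (assuming mod_status on Apache 2.4; the allowed network is illustrative):
<Location "/server-status">
    SetHandler server-status
    # only the admin network may view it
    Require ip 192.0.2.0/24
</Location>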
I would guess most hackers are more likely to be aware of bugs and exploits in certain versions of Apache or other web servers and will try to see if any of those can be exploited, rather than trying to guess the specific version first.
In fact, as an interesting aside, Apache themselves have long been of the opinion that hiding server header information is pointless "security through obscurity" (a point I, and many others, disagree with them on), even putting this in their documentation:
Setting ServerTokens to less than minimal is not recommended because
it makes it more difficult to debug interoperational problems. Also
note that disabling the Server: header does nothing at all to make
your server more secure. The idea of "security through obscurity" is a
myth and leads to a false sense of safety.
And even allowing open access to their server-status page.

How can I limit a user's total bandwidth across multiple webservers and connections?

I have a web service where users download files tunneled via apache2 reverse proxies.
I am using mod_rewrite and the P flag in conjunction with a rewrite map.
Basically it looks like this:
<Location /my-identifier>
    RewriteEngine on
    RewriteRule /my-identifier/(.*) ${my-rewrite-map:$1} [P]
</Location>
I know I can limit the bandwidth of one connection or even one ip-address per server using mod_bandwidth or something similar.
However I want the limit to take effect only for certain users (namely those who make a lot of traffic and exceeded the fair use volumes).
I also want it to span across multiple servers.
It is possible for me to set a custom environment variable, if that helps (I have full control over the URL where I can encode it into and can set it using the rewrite rule)!
Basically what I want is, for example, for a user who has reached their limit to get only 5 Mbps of speed, no matter how many connections they use or how many servers they connect to.
Is it somehow possible? Is there a module?
My thought would be a centralized data store that the servers report their per-IP traffic stats to, probably some sort of RRD data structure. They could then select the traffic for an IP over a specified time interval (for example the last 60 seconds) and apply a throttle factor accordingly.
But I really don't want to build this all by myself; I could, but it would take me months... I am also not bound to Apache. I am using Nginx servers for the same thing as well, so if there is something for Nginx I can switch to it!
I don't know about Apache, but since you listed Nginx as a tag, how about something like the approach below?
Set up Nginx as a reverse proxy to your Apache servers or web-services with more or less the following configuration:
upstream serverlist {
    server www1.example.com;
    server www2.example.com;
    server www3.example.com;
}

location / {
    proxy_pass http://serverlist;
}
The "overall connections" requirement you have is not directly mappable, but you can probably get reasonably close to what you want with a combination of the following directives added to the location block:
limit_rate - this is per connection
limit_conn - this allows you to limit the number of connections
limit_req - this allows you to limit the number of requests/sec and allowable bursts
limit_zone - sets up the shared-memory zone for your limits (replaced by limit_conn_zone in newer Nginx versions)
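A hedged sketch of how these could fit together in the proxy config (zone names and numbers are illustrative; it uses the newer limit_conn_zone/limit_req_zone forms):
# http block: shared-memory zones keyed by client IP
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    location / {
        limit_req  zone=req_per_ip burst=20;
        limit_conn conn_per_ip 10;
        limit_rate 625k;               # roughly 5 Mbit/s per connection
        proxy_pass http://serverlist;
    }
}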
UPDATE:
There's a third-party Nginx module that limits the overall rate per IP, to be found here.

Can you use gzip over SSL? And Connection: Keep-Alive headers

I'm evaluating the front end performance of a secure (SSL) web app here at work and I'm wondering if it's possible to compress text files (html/css/javascript) over SSL. I've done some googling around but haven't found anything specifically related to SSL. If it's possible, is it even worth the extra CPU cycles since responses are also being encrypted? Would compressing responses hurt performance?
Also, I want to make sure we're keeping the SSL connection alive so we're not doing SSL handshakes over and over. I'm not seeing Connection: Keep-Alive in the response headers. I do see Keep-Alive: 115 in the request headers, but that only keeps the connection alive for 115 seconds (it seems like the app server is closing the connection after a single request is processed?). Wouldn't you want the server to set that response header for as long as the session inactivity timeout?
I understand browsers don't cache SSL content to disk so we're serving the same files over and over and over on subsequent visits even though nothing has changed. The main optimization recommendations are reducing the number of http requests, minification, moving scripts to bottom, image optimization, possible domain sharding (though need to weigh the cost of another SSL handshake), things of that nature.
Yes, compression can be used over SSL; it takes place before the data is encrypted, so it can help over slow links. It should be noted, though, that this is generally a bad idea: it also opens up a vulnerability (see the notes on BREACH and CRIME further down).
After the initial handshake, SSL is less of an overhead than many people think* - even if the client reconnects, there's a mechanism to continue existing sessions without renegotiating keys, resulting in less CPU usage and fewer round-trips.
Load balancers can screw with the continuation mechanism, though: if requests alternate between servers then more full handshakes are required, which can have a noticeable impact (~few hundred ms per request). Configure your load balancer to forward all requests from the same IP to the same app server.
Which app server are you using? If it can't be configured to use keep-alive, compress files and so on then consider putting it behind a reverse proxy that can (and while you're at it, relax the cache headers sent with static content - HttpWatchSupport's linked article has some useful hints on that front).
(*SSL hardware vendors will say things like "up to 5 times more CPU" but some chaps from Google reported that when Gmail went to SSL by default, it only accounted for ~1% CPU load)
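If the front end is (or is put behind) Apache, keep-alive comes down to a few directives; a hedged sketch with illustrative values:
# httpd.conf -- keep connections open for reuse across requests
KeepAlive On
MaxKeepAliveRequests 100   # requests allowed per connection (0 = unlimited)
KeepAliveTimeout 15        # seconds to wait for the next request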
You should probably never use TLS compression. Some user agents (at least Chrome) will disable it anyway.
You can selectively use HTTP compression
You can always minify
Let's talk about caching too
I am going to assume you are using an HTTPS Everywhere style web site.
Scenario:
Static content like css or js:
Use HTTP compression
Use minification
Long cache period (like a year)
etag is only marginally useful (due to long cache)
Include some sort of version number in the URL in your HTML pointing to this asset so you can cache-bust
HTML content with ZERO sensitive info (like an About Us page):
Use HTTP compression
Use HTML minification
Use a short cache period
Use etag
HTML content with ANY sensitive info (like a CSRF token or bank account number):
NO HTTP compression
Use HTML minification
Cache-Control: no-store, must-revalidate
etag is pointless here (due to revalidation)
some logic to redirect the page after session timeout (taking into account multiple tabs). If someone presses the browser's Back button, the sensitive info is not displayed due to the cache header.
You can use HTTP compression with sensitive data IF:
You never return user input in the response (got a search box? don't use HTTP compression)
Or you do return user input in the response but randomly pad the response
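A hedged Apache sketch of the static-content advice above (mod_deflate and mod_headers assumed enabled; the file extensions are illustrative; dynamic pages are left uncompressed):
# compress and long-cache static text assets only
<FilesMatch "\.(css|js)$">
    SetOutputFilter DEFLATE
    Header set Cache-Control "public, max-age=31536000"
</FilesMatch>

# never store pages that may carry sensitive data
<FilesMatch "\.php$">
    Header set Cache-Control "no-store, must-revalidate"
</FilesMatch>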
Using compression with SSL opens you up to vulnerabilities like BREACH, CRIME, or other chosen plain-text attacks.
You should disable compression, as SSL/TLS currently has no way to mitigate these length-oracle attacks.
To your first question: SSL operates at a different layer than compression. In a sense the two are web server features that can work together without overlapping. Yes, by enabling compression you'll use more CPU on your server but have less outgoing traffic, so it's a tradeoff.
To your second question: Keep-Alive behavior really depends on the HTTP version. You could also move your static content (images, movies, audio, etc.) to a non-SSL server.

How can I implement rate limiting with Apache? (requests per second)
