How can I block clients that consistently hit the same unpredictable URLs - apache

My Apache server goes down when a random client starts a lot of GET requests for the same URL. The problem is that it happens with unpredictable URL paths. With fail2ban I can ban clients hitting a predetermined URL, but not unknown URL paths. Is there a way to resolve this?

Depending on your web server, you should be able to scan your web server logs for GET requests and ban clients who make too many of them within a specific time period. You just need to be careful to avoid banning legitimate users, so the allowable frequency of GET requests is something to fine-tune carefully.
Create a new jail filter (the file name must match the filter name used in the jail below): sudo nano /etc/fail2ban/filter.d/getflood.conf
Define the regex you need for identifying GET requests based on the logs for your Web server. With a standard Apache access.log, it would be: failregex = ^<HOST>.*\s"GET\s.*$
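Note that a fail2ban filter file needs a [Definition] section, so a minimal /etc/fail2ban/filter.d/getflood.conf would look like:
[Definition]
failregex = ^<HOST>.*\s"GET\s.*$
ignoreregex =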
Add an entry to your /etc/fail2ban/jail.local:
[getflood]
enabled = true
action = iptables-allports[name=getflood]
filter = getflood
logpath = /var/log/apache2/*access.log
maxretry = 30
findtime = 90
bantime = 604800
Here, we let any individual IP address make up to 30 GET requests every 90 seconds. Again, without more details about your server, you'll need to play around with these timings to avoid banning legitimate users.
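Before enabling the jail, it's worth testing the filter against a real log to see how many matches it produces; fail2ban ships a tool for exactly this:
fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/getflood.conf
After reloading fail2ban, sudo fail2ban-client status getflood will show how many IPs are currently banned.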

Related

How can I match a dynamically changing URL in VCR

In my Ruby on Rails application there are two localhost servers running. I am writing test cases for the first server, so I have to mock the second server.
For this I am using VCR to record the responses I get from the 2nd server and play the recorded cassette while running the tests on the 1st server.
I am stuck at the part where the first server makes a request to the second server (the session_id in the URL changes each time), and I want the response to be the same every time it makes a request.
Using VCR you can match requests on any parameters you wish (method, host, path, etc.) using the match_requests_on cassette option or a fully custom matcher - https://relishapp.com/vcr/vcr/v/3-0-3/docs/request-matching
I made this work by ignoring params. So for you, something like this could work:
VCR.use_cassette('name_of_your_cassette',
                 match_requests_on: [:method, VCR.request_matchers.uri_without_params('session_id')]) do
  # your HTTP query goes here
end
In my case it was the query that was changing, so I ignored that in the VCR request matcher.
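If the changing session_id sits in the path rather than the query string, uri_without_params won't help; VCR also lets you register a fully custom matcher. A sketch, assuming a hypothetical /sessions/<id> path segment (the matcher name and pattern are illustrations, not from the question):
VCR.configure do |c|
  # Treat two URIs as equal once the changing session segment is masked out.
  c.register_request_matcher :uri_ignoring_session do |r1, r2|
    mask = ->(uri) { uri.gsub(%r{/sessions/[^/?]+}, '/sessions/X') }
    mask.call(r1.uri) == mask.call(r2.uri)
  end
end
VCR.use_cassette('name_of_your_cassette',
                 match_requests_on: [:method, :uri_ignoring_session]) do
  # your HTTP query goes here
end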

Building Wireshark LDAP filter for future scripting

Hi all. In our environment we have several servers still using LDAP for authentication, and I need to filter LDAP requests for 8 hours to get an overview of how many different accounts we may have exposed.
The issue is that I cannot let the captures get too large. (Just as an example: filtering on the LDAP port and matching the TCP flags, I can barely capture 3 minutes before the capture hits 100 MB.)
I wonder if there is a way to build a capture filter that would look for a hex value in the data part of the packet.
Does anyone have any clue on how I would be able to do that?
Currently I am using a capture filter like:
port 389 && tcp[13] == 24
and a display filter like:
ldap.bindRequest_element && ldap.messageID==1
Another issue is that, doing this, I couldn't come up with a script to extract the usernames, so that I could inform all possibly compromised users to change their passwords.
Thanks in advance
I found a way to do this.
Just for people who are still curious or need something similar, I used the following:
port 389 && tcp[((tcp[12:1] & 0xf0) >> 2) + 2:2] = 0x0201 && tcp[((tcp[12:1] & 0xf0) >> 2) + 4:1] = 0x01
This checks the data part of the packet: (tcp[12:1] & 0xf0) >> 2 computes the TCP header length in bytes, so the expression indexes into the TCP payload. In a BER-encoded LDAP message the messageID comes right after the outer SEQUENCE header as 0x02 0x01 <id>, so for a bind request with message ID 1, payload bytes 2-3 are always 0x0201 and byte 4 is always 0x01, which is what the filter matches on.
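For the username extraction mentioned in the question, once a capture containing only bind requests exists, a tshark one-liner along these lines may work (ldap_binds.pcap is a placeholder file name, and this assumes the dissector exposes the bind DN as ldap.name):
tshark -r ldap_binds.pcap -Y "ldap.bindRequest_element" -T fields -e ip.src -e ldap.name | sort -u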
Cheers.

How does HAProxy figure out SSL_FC_SNI for map_dom

I have an HAProxy config using maps.
The HAProxy config file has the line below:
%[ssl_fc_sni,lower,map_dom(/etc/haproxy/domain2backend.map)]
And in domain2backend.map, I have the entries below:
dp.stg.corp.mydom.com dp_10293
/dp dp_10293
dp.admin.stg.corp.mydom.com dp_10345
Now when I access https://dp.admin.stg.corp.mydom.com/index.html, it directs me to backend dp_10293. However, using a simple full-string match with map(/etc/haproxy/domain2backend.map) solves the problem and directs me to the proper backend dp_10345. The cert I have is a wildcard cert, *.mydom.com.
So how is map_dom comparing the domains, and how is it directing a request meant for dp.admin.stg.corp.mydom.com to the backend of dp.stg.corp.mydom.com?
Since I am using map_dom, it splits the domain on dots (.) and does token matching; whichever entry matches first determines the backend.
Here dp.admin.stg.corp.mydom.com matches any of:
dp.admin.stg.corp.mydom.com
/dp
/admin
/stg
/corp
/mydom
/com
/dp.admin
/dp.admin.stg
/dp.admin.stg.corp
/dp.admin.stg.corp.mydom
/admin.stg.corp.mydom.com
/stg.corp.mydom.com
/corp.mydom.com
/mydom.com
And in my case, since I had an entry for /dp, it was routing to backend dp_10293.
Changing map_dom to map will fix it as map does a strict string comparison.
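If some SNI values might be missing from the map, the map converter also accepts a default value, e.g. (be_default is a placeholder backend name):
use_backend %[ssl_fc_sni,lower,map(/etc/haproxy/domain2backend.map,be_default)]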

Maintain Session when logged in across all Pages, End session after set time (OWA_COOKIE)

I need to maintain a session for credentials across the web pages I have made. I have next to no experience with OWA_COOKIE and am unsure how to go about it.
I just need it to maintain the session, and end the session if: 1. it is inactive for 15 minutes, OR 2. the user logs out.
I have had a whirl at it and this is what I have, but it doesn't work and I am at a loss. Can someone help or point me in the right direction?
PROCEDURE maintain_session_cookie  -- no RETURN clause, so this should be a procedure, not a function
AS
  TYPE vc_arr IS TABLE OF varchar2(4000)
    INDEX BY BINARY_INTEGER;
  TYPE cookie IS RECORD (
    name     varchar2(4000),
    vals     vc_arr,
    num_vals integer);
BEGIN
  -- Open the HTTP header without closing it, so the cookie can still be added
  owa_util.mime_header('', FALSE);
  owa_cookie.send(
    name    => 'Session_Cookie',
    value   => LOWER(),            -- the cookie value is still missing here
    expires => SYSDATE + 365);
  -- Set the cookie and redirect to another page
  owa_util.redirect_url();         -- the target URL is still missing here
EXCEPTION
  WHEN OTHERS THEN
    NULL;                          -- swallows every error, hiding why it "doesn't work"
END;
I have been just fiddling to see how it works and provide the functionality that I need.
First of all, setting the session lifetime with cookies is quite awkward. You can set the timeout-secs parameter in either web.xml or weblogic.xml (see the Oracle docs).
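For the 15-minute requirement, the equivalent settings would look roughly like this. In web.xml (standard servlet config, value in minutes):
<session-config>
    <session-timeout>15</session-timeout>
</session-config>
And in weblogic.xml (WebLogic-specific, value in seconds):
<session-descriptor>
    <timeout-secs>900</timeout-secs>
</session-descriptor>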
Both of your requirements should be handled by the HTTP server; that's my point of view.
Now, let's say you still want to use cookies (maybe you do not use WebLogic, or for another reason). You will face the following problems:
You will need to set this cookie on every page you display to the user, and not only pages: every AJAX call should also refresh the cookie. Everything that indicates user activity should reset it.
The expires parameter should, obviously, be SYSDATE + INTERVAL '15' MINUTE; then your cookie lives for exactly 15 minutes, and if you refresh it as described in point 1, it will only be lost when there is no activity (see the sketch after this list).
You will have to close the session yourself when the cookie is no longer present in the HTTP request, which is an additional problem.
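For point 2, a minimal sketch of such a refresh, assuming p_session_id holds your session identifier (the parameter name is a placeholder):
owa_util.mime_header('text/html', FALSE);  -- keep the header open so the cookie can be added
owa_cookie.send(
    name    => 'Session_Cookie',
    value   => p_session_id,               -- placeholder session id
    expires => SYSDATE + INTERVAL '15' MINUTE);
owa_util.http_header_close;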
The thing I want to say is: do it with server configuration and not with cookies. This will save you time and nerves.

How can I load balance FastAGI?

I am writing multiple AGIs in Perl that will be called from the Asterisk dialplan. I expect to receive numerous simultaneous calls, so I need a way to load balance them. I have been advised to use FastAGI instead of AGI. The problem is that my AGIs will be distributed over many servers, not just one, and I need my entry-point Asterisk box to dispatch the calls among those servers (where the AGIs reside) based on their availability. So I thought of providing the FastAGI application with multiple IP addresses instead of one. Is this possible?
Any TCP reverse proxy would do the trick. HAProxy being one and nginx with the TCP module being another one.
A while back, I crafted my own FastAGI proxy using node.js (nodast) to address this very specific problem and a bit more, including the ability to run the FastAGI protocol over SSL and route requests based on the AGI request location and parameters (such as $dnis, $channel, $language, ...)
Moreover, as the proxy configuration is basically javascript, you could actually load balance in really interesting ways.
A sample config would look as follows:
var config = {
  listen : 9090,
  upstreams : {
    test : 'localhost:4573',
    foobar : 'foobar.com:4573'
  },
  routes : {
    'agi://(.*):([0-9]*)/(.*)' : function() {
      if (this.$callerid === 'unknown') {
        return ('agi://foobar/script/' + this.$3);
      } else {
        return ('agi://foobar/script/' + this.$3 + '?callerid=' + this.$callerid);
      }
    },
    '.*' : function() {
      return ('agi://test/');
    },
    'agi://192.168.129.170:9090/' : 'agi://test/'
  }
};
exports.config = config;
I have a large IVR implementation using FastAGI (24 E1's all doing FastAGI calls, peaking at about 80%, so that's nearly 600 Asterisk channels calling FastAGI). I didn't find an easy way to do load balancing, but in my case there are different FastAGI calls: one at the beginning of the call to validate the user in a database, then a different one to check the user's balance or their most recent transactions, and another one to perform a transaction.
So what I did was send all the validation and simple queries to one application on one server and all the transaction calls to a different application on a different server.
A crude way to do load balancing, if you have a lot of incoming calls on zaptel/dahdi channels, would be to use different groups for the channels. For example, suppose you have 2 FastAGI servers and 4 E1's receiving calls. You can set up 2 E1's in group g1 and the other 2 E1's in group g2. Then you declare global variables like this:
[globals]
serverg1=ip_of_server1
serverg2=ip_of_server2
Then on your dialplan you call FastAGI like this:
AGI(agi://${server${CHANNEL(callgroup)}}/some_action)
On channels belonging to group g1, CHANNEL(callgroup) resolves to g1, so ${server${CHANNEL(callgroup)}} becomes ${serverg1}, which resolves to ip_of_server1; on channels belonging to group g2 you get ${serverg2}, which resolves to ip_of_server2.
It's not the best solution, because calls usually start coming in on one span and only then spill over to the next, so one server will get more work, but it's something.
To get real load balancing I guess we would have to write a FastAGI load balancing gateway, not a bad idea at all...
Mehhh... use the same constructs that would apply to load balancing something like web page requests.
One way is round-robin DNS. So if you have vru1.example.com at 10.0.1.100 and vru2.example.com at 10.0.1.101, you put two entries in DNS like...
fastagi.example.com.    IN  A   10.0.1.100
fastagi.example.com.    IN  A   10.0.1.101
... then from the dialplan, agi(agi://fastagi.example.com/youagi) should in theory alternate between 10.0.1.100 and 10.0.1.101, and you can add as many hosts as you need.
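To sanity-check the rotation (using the example name above):
dig +short fastagi.example.com
It should print both addresses; note that resolver caching can make the distribution uneven in practice.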
The other way to go is a bit more involved: proxy tools like HAProxy should be able to route between multiple servers, with the added benefit of being able to "take one out" of the mix for maintenance, or do more advanced balancing like distributing equally based on current load, as in the sketch below.
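For instance, a minimal HAProxy TCP section might look like this sketch (server names and addresses are placeholders; 4573 is the conventional FastAGI port):
listen fastagi
    bind *:4573
    mode tcp
    balance roundrobin
    server agi1 10.0.1.100:4573 check
    server agi2 10.0.1.101:4573 check
The check keyword makes HAProxy health-check each server, so a dead FastAGI host is taken out of rotation automatically.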