Simple form input triggers 403 Forbidden page - Apache

After an Apache upgrade on my shared server, I have been having nightmare issues with form input on all of my reseller hosting accounts. Clients get a 403 (or, in the case of a WordPress install, a 404, which has really confused me) after the simplest, most innocent-looking form input. For example, "he is having a lot of trouble" in a text field results in a 403!
It took almost two weeks to figure out what was going on, as the error seemed random and hard to replicate, but after I asked the clients for the exact text they were not able to enter, we got to the mod_security issue. The answer from tech support was: "While checking the issue in detail, we found that a mod_security rule was getting triggered on the server while trying to submit the content as "he is having a lot of trouble". We have whitelisted the rule for the website, which resolved the issues."
My question is: how can I deal with this proactively? Is there a list of mod_security rules that I can check, so I can test some input, ask for additional whitelisting, etc.? With about 100 accounts all having problems, it's enough to make me want to get out of the hosting business altogether.

I don't understand your scenario or your question. Are you managing the host or not?
It sounds like you are hosting sites on a shared server, so you do not have access to the full server, but are setting up hosts for clients - is that right?
Running a WAF like ModSecurity requires monitoring log files to identify false positives like this. If you do not have access to the log files, then you need to ask your hosting provider what their options are for managing this sort of thing. Or will they do nothing until you raise it?
You can also ask them to turn off ModSecurity completely. Most sites get on fine without a WAF - though personally I think they do add value and security.
Finally, as to what rules are running on your instance, only your tech support can answer that. ModSecurity itself is only an engine and comes with no rules. People can write their own, buy some, or use free sets of rules like the OWASP Core Rule Set, so how you can test this depends on which rule set you have. Most rules are fairly generic in nature, so they do produce false positives unless tweaked.
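If your host will tell you which rule IDs are firing (they appear in the ModSecurity audit log, often /var/log/modsec_audit.log, though the path varies), you can ask for, or apply, a per-site exclusion. A minimal sketch, assuming ModSecurity 2.x in the site's vhost config, with a made-up rule ID:

    <IfModule mod_security2.c>
        # Drop one false-positive rule for this vhost only.
        # 981318 is a placeholder - use the ID from your audit log.
        SecRuleRemoveById 981318

        # Or scope the exclusion to just the form that trips it:
        <LocationMatch "^/contact">
            SecRuleRemoveById 981318
        </LocationMatch>
    </IfModule>

On shared hosting you usually cannot edit this yourself, but knowing the rule ID makes the whitelisting request to your host much faster.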

Is this malicious botting, and how can I prevent it?

I recently set up a subdomain on my website with the intention of soon cloning my website for testing purposes. The subdomain was "beta", so beta.example.com.
It was set up and password-protected via .htaccess, and it is proxied through Cloudflare. It's about three days old and was never announced publicly (only I know of it).
Today I noticed a flood of requests from many different IPs on my Apache server-status page.
CPU load was also climbing and is very, very high. Upon refreshing, this continued, and it is actually still continuing right now. Is this some sort of botting/brute-force attack? I can't imagine how or why else so many IPs would be accessing this unlinked, private subdomain. I've since taken it down from Cloudflare DNS, and the IPs are still connecting somehow; I assume it will take time to propagate.
Is this malicious? And how can it be prevented? I assume it was/is attempting to brute-force the .htaccess password? Is it because it's a common subdomain name ("beta") - would that matter? Again, it's only been about three days, so damn, they work fast.
It could be search engine robots, script kiddies, or a brute-force attack; you can get more information from your log files or by analyzing the IP addresses.
I'm not sure I really understand your problem and what you want.
If your website is online, then yes, some people/bots will try to access it, just like any other website.
If you don't want anybody else to access your website, you can add an IP restriction.
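For example, a minimal .htaccess sketch in Apache 2.4 syntax (203.0.113.10 is a placeholder for your own address):

    # Allow only this address; everyone else gets a 403
    Require ip 203.0.113.10

One caveat: if the site is proxied through Cloudflare, the server sees Cloudflare's IPs rather than the visitor's, so you would need something like mod_remoteip (configured with Cloudflare's published ranges) for this to match the real client address.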

Is it possible to find out the version of Apache HTTPD when ServerSignature is off?

I have a question: can I find out the version of Apache when the full signature is disabled? Is it even possible? If it is, how? I think it must be possible, because black hats hack big corporate servers, and knowledge of the version of the victim's services is essential for that. What do you think? Thanks.
Well for a start there are two (or even three) things to hide:
ServerTokens - which controls the version shown in the Server response header. The header cannot be turned off in Apache config but can be reduced to just "Apache".
ServerSignature - which displays the server version in the footer of error pages.
X-Powered-By - which is not used by Apache itself but by back-end servers and services it might send requests to (e.g. PHP, J2EE servers, etc.).
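Reducing all three looks roughly like this in the main Apache config (the Header line assumes mod_headers is enabled; back ends such as PHP also have their own switch, e.g. expose_php = Off in php.ini):

    # Server header becomes just "Apache" - version, OS and modules hidden
    ServerTokens Prod

    # No version footer on server-generated error pages
    ServerSignature Off

    # Strip identification headers added by back-end services
    Header unset X-Powered-By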
Now, servers do leak some information due to differences in how they operate or how they interpret the spec. For example, the order of response headers, capitalisation, and how they respond to certain requests all give clues as to what server software might be answering HTTP requests. However, using this to fingerprint a specific version of that software is trickier, unless there was an obvious, observable change from the client side.
Other options include looking at the server-status page - though you would hope any administrator clever enough to reduce the default Server header would also restrict access to the server-status page - or getting in through another security hole (e.g. being able to upload executable scripts or the like).
I would guess most hackers are more likely to be aware of bugs and exploits in particular versions of Apache or other web servers and simply try to see if any of those can be exploited, rather than trying to guess the specific version first.
In fact, as an interesting aside, Apache themselves have long been of the opinion that hiding server header information is pointless "security through obscurity" (a point I, and many others, disagree with them on), even putting this in their documentation:
Setting ServerTokens to less than minimal is not recommended because it makes it more difficult to debug interoperational problems. Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of "security through obscurity" is a myth and leads to a false sense of safety.
And even allowing open access to their server-status page.
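To see what your own server currently reveals from the client side, a quick check (example.com is a placeholder):

    # Print only the identification headers from the response
    curl -sI https://example.com/ | grep -i -E '^(server|x-powered-by)'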

How to create a friendly URL in Tomcat?

I want to modify my application URL from http://localhost:8080/monitor/index.html to just monitor, so that putting monitor in the browser opens my application. Is there a way to achieve this? Can someone suggest the configuration changes that would be required?
Can I map my short URL to the existing one, maybe somewhere in web.xml? I am not sure about the approach; any suggestions would be great.
Thanks and regards
Deb
You're mixing up several different protocol layers in your question.
If you enter nothing but "monitor" in the browser URL bar, the browser is going to first look up "monitor" in DNS and, finding nothing, it will then probably send a query to Google or your configured search engine. In the past, browsers have taken other steps, such as appending ".com" and prepending "www.", but I don't think modern browsers do that any more.
So far, your server is not even remotely involved.
If you're a large ISP user (TimeWarner, Comcast) and use their DNS it's also possible the ISP will intercept your failed DNS lookup and route the request to a "helpful" search page (i.e. SPAM) of their own.
At this point the request is still nowhere near your server.
I suppose you could mess with the /etc/hosts file on your local system to resolve "monitor" to the proper hostname, but that's an extremely brittle solution that has to be hard-coded on each machine where you want this "shortcut" link (and which breaks when the hostname changes).
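For completeness, the hack is just one line per machine - and even then the browser would request http://monitor/ on port 80 at the root path, so you would still need something listening there to forward the request on to :8080/monitor/index.html:

    # /etc/hosts - hard-coded on every machine, breaks when the hostname changes
    127.0.0.1    monitor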
You're much better off just setting up a web shortcut in your browser that points to the right place.

Is there any way to (temporarily) block an IP based on the requests it makes?

Background: we're running Drupal 6 on an Apache server. I've scoured the internet but can't seem to find anything on exactly what I'm looking for, maybe someone here has an idea.
As a website with a decent amount of traffic, we tend to get a lot of low-level attack attempts. Any time I look through the logs, there's at least a handful of "page not found" errors from script kiddies and bots trying to access pages like wp-login.php or admin.php. Obviously these attacks never get very far, but serving up all the 404s can still put a significant load on our server.
These attempts are often quite amateurish. Generally, they all come from one IP address over a period of a few minutes. So I'm wondering if there's some way to implement temporary blocks by IP address for anyone who tries something that's obviously an attack. For example, maybe there's a way to configure .htaccess to say:
    if (bot_IP tries to access wp-login.php, admin.php, administer/index.php, phpmyadmin.php, and so on)
        deny from bot_IP for the next four hours
Has anyone ever tried anything like this? It would be wonderful if we could reduce the amount of time we spend sending 404s to attackers, and it seems to me like a lot of people could find it useful.
Thanks!
If you do this in a Drupal module, you'll spend more resources checking logs and filtering requests than you do sending 404s. If you have root access to the server, fail2ban (http://www.fail2ban.org/wiki/index.php/Main_Page) will work very well. It scans the Apache error logs, uses regex-based rules to match log entries, and updates the OS firewall rules to handle the blocking.
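A minimal sketch of a jail for exactly this scenario, using the apache-noscript filter that ships with fail2ban (the log path varies by distro - e.g. /var/log/httpd/error_log on Red Hat-style systems):

    # /etc/fail2ban/jail.local
    [apache-noscript]
    enabled  = true
    port     = http,https
    logpath  = /var/log/apache2/error.log
    maxretry = 3
    # 14400 seconds = the four-hour ban from the question
    bantime  = 14400

Because the ban happens in the firewall, repeat offenders never reach Apache at all, which is what saves you the cost of serving the 404s.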

Need advice on a secure webserver for clients to log into and view data

Hey guys, I've been googling ambitiously, but my searches seem to be somewhat ambiguous, so I thought I'd ask here.
My company has asked me to look into a web portal system that allows clients to log in via their browser and view/download their specific invoices/reports (the web server would be in-house).
These (initially at least) would be static documents: PDFs, maybe Excel spreadsheets, and the like.
What I want to happen is: a customer heads to our website (hosted elsewhere), clicks a link that takes them to a secure login for our web server, then enters their login details and is taken to their respective 'folder' on our web server. Here they can download PDFs that we keep up to date.
The main considerations are for it to be secure, such that users can't access other users' folders, and for users not to have to install anything to view/download their documents.
I'm setting up a PC as a LAMP server right now. I've read that WebDAV would be a good way to go, but I'm not sure how to get that working in a browser. Any advice or resources you can point me to for a bit more direction would be greatly appreciated.
Thanks, Rob
If you've only got a handful of accounts to manage, Apache's built-in HTTP auth works pretty well; you write usernames and hashed passwords into an .htpasswd file with the htpasswd utility.
Then you use <Location> directives to specify the URL and the directories where the data lives, and inside the <Location> directives, use the Require directive to either list specific usernames or accept any valid-user.
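As a rough sketch (the paths and the username are placeholders), assuming one protected directory per client:

    # Create or update a user; -c creates the file the first time only
    htpasswd -c /etc/apache2/clients.htpasswd alice

    # In the vhost config:
    <Location "/clients/alice">
        AuthType Basic
        AuthName "Client documents"
        AuthUserFile /etc/apache2/clients.htpasswd
        Require user alice
    </Location>

Serve it over HTTPS, since basic auth sends the credentials with every request.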
Just make sure your .htpasswd file isn't stored in the web root. You don't want people to get a hold of the thing and start brute-forcing your passwords (or see your other allowed users, in case client privacy is a priority).
But it is pretty maintenance-heavy -- password changes pretty much have to go through a human. I imagine someone has scripts to automate that, but I wouldn't trust them very far. :)
If you want something that scales larger, I think you might be better off building such a tool yourself.