Block all xmlrpc.php requests globally on a cPanel server - Apache

I have Googled around a lot looking for a working example of blocking all xmlrpc.php requests on my cPanel server. I've tried the CSF option, which seems to work only if the requests come from the same IP (they do not). I've tried the mod_security option that everyone seems to think works, but it doesn't do anything; requests for xmlrpc.php still come in and are processed. I've also tried Apache config changes with a "Files" block and "Deny" directives, but requests are still always allowed and processed.
Does anyone out there have a working example of how to totally disable xmlrpc.php on all sites on a cPanel server with CloudLinux and CageFS?

I use a pre-main global include (WHM -> Service Configuration -> Apache Configuration -> Pre-Main Include -> All Versions) to block xmlrpc server-wide:
<IfModule mod_alias.c>
RedirectMatch 301 /xmlrpc\.php http://127.0.0.1/
</IfModule>
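If you would rather return a 403 than redirect, a mod_authz_core variant (a sketch, assuming Apache 2.4 as shipped with current EasyApache) would be:
<FilesMatch "xmlrpc\.php$">
Require all denied
</FilesMatch>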

Related

How would you block all referring domains via .htaccess while allowing a certain one through (not by IP address)? Script isn't working

I need to have a .htaccess file made to the following specifications.
Allow ONLY "exampledomain.com/sometext" to access "mydomain.com/folder"
Block all other referrers and redirect them to "google.com"
But I am not sure how to go about doing this.
So far this is what I have got. But it is not working.
<If "%{HTTP_HOST} != 'google.com'">
Redirect / http://www.yahoo.com/
</If>
Using the latest Apache. I really appreciate the help here. I have gone through a few other posts but can't seem to figure it out.
Does the 'google.com' have to include the full URL, or is there a way to make it a wildcard like *google.com*?
This should point you in the right direction:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
# the Referer header contains a full URL, so match scheme and host rather than the bare domain
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?exampledomain\.com/ [NC]
RewriteRule ^ https://www.google.com [R=301,END]
It is a good idea to start out with a 302 temporary redirect and only change it to a 301 permanent redirect later, once you are certain everything is set up correctly. That prevents caching issues while you are trying things out.
If you receive an internal server error (HTTP status 500) using the rule above, chances are that you are running a very old version of the Apache HTTP server. In that case you will see a definite hint about an unsupported [END] flag in your HTTP server's error log. You can either upgrade or use the older [L] flag; it will probably work the same in this situation, though that depends a bit on your setup.
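On such an older server the final rule would, for example, become (a sketch, using a 302 while testing):
RewriteRule ^ https://www.google.com [R=302,L]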
This implementation will work in the HTTP server's host configuration as well as in a distributed configuration file (".htaccess"). Obviously the rewriting module needs to be loaded in the HTTP server and enabled for the host. If you use a distributed configuration file, you need to make sure its interpretation is enabled at all in the host configuration and that the file is located in the host's DOCUMENT_ROOT folder.
And a general remark: you should always prefer to place such rules in the HTTP server's host configuration instead of using distributed configuration files (".htaccess"). Those distributed configuration files add complexity, are often a cause of unexpected behavior, are hard to debug, and they really slow down the HTTP server. They are only provided as a last resort for situations where you do not have access to the real host configuration (read: really cheap hosting providers) or for applications that insist on writing their own rules (which is an obvious security nightmare).
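In the host configuration that could look roughly like this (a sketch; the ServerName takes over the job of the HTTP_HOST condition):
<VirtualHost *:80>
ServerName mydomain.com
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?exampledomain\.com/ [NC]
RewriteRule ^ https://www.google.com [R=301,END]
</VirtualHost>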

Cannot remove Apache noindex page

CentOS 7.3.1611
Apache httpd-2.4.6-45.el7.centos.x86_64
I need to replace the default Apache noindex page ("Testing 123..") with a config page for a dev environment.
I tried deleting it but it seems to have permanently cached itself somewhere on the server and I can't get rid of it. I've deleted /etc/httpd/conf.d/welcome.conf as well as the entire /usr/share/httpd/noindex/ directory.
I've rebooted the server and verified it's not the client (same result on different client computers and browsers).
Is there some caching mechanism responsible for this? How do I clear it?
Attempting to change Apache's noindex processing is not a good idea. A better way to do it might be redirecting requests for "/" with a LocationMatch directive in httpd.conf:
<LocationMatch "^/$">
Redirect "/" "/config.php"
</LocationMatch>
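A single mod_alias directive would achieve the same thing (a sketch, assuming the config page really lives at /config.php):
RedirectMatch "^/$" "/config.php"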

Apache 2.4 + PHP-FPM and Authorization headers

Summary:
Apache 2.4's mod_proxy does not seem to be passing the Authorization headers to PHP-FPM. Is there any way to fix this?
Long version:
I am running a server with Apache 2.4 and PHP-FPM. I am using APC for both opcode caching and user caching. As recommended by the Internet, I am using Apache 2.4's mod_proxy_fcgi to proxy the requests to FPM, like this:
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/foo/bar/$1
The setup works fine, except for one thing: APC's bundled apc.php, used to monitor the status of APC, does not allow me to log in (which is required for looking at user cache entries). When I click "User cache entries" to see the user cache, it asks me to log in; clicking the login button displays the usual HTTP login form, but entering the correct login and password yields no success. This works perfectly when running with mod_php instead of mod_proxy + PHP-FPM.
After some googling I found that other people had the same issue and figured out that it was because Apache was not passing the Authorization HTTP headers to the external FastCGI process. Unfortunately, I only found a fix for mod_fastcgi, which looked like this:
FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
Is there an equivalent setting or some workaround which would also work with mod_proxy_fcgi?
Various Apache modules will strip the Authorization header, usually for "security reasons". They all have different obscure settings you can tweak to overrule this behaviour, but you'll need to determine exactly which module is to blame.
You can work around this issue by passing the header directly to PHP via an environment variable:
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
See also Zend Server Windows - Authorization header is not passed to PHP script
In some scenarios, even this won't work directly and you must also change your PHP code to access $_SERVER['REDIRECT_HTTP_AUTHORIZATION'] rather than $_SERVER['HTTP_AUTHORIZATION']. See When setting environment variables in Apache RewriteRule directives, what causes the variable name to be prefixed with "REDIRECT_"?
This took me a long time to crack, since it's not documented under mod_proxy or mod_proxy_fcgi.
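A mod_rewrite variant of the same workaround, widely used in framework .htaccess files, sets the variable from the raw request header (a sketch):
RewriteEngine on
RewriteCond %{HTTP:Authorization} .
RewriteRule ^ - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]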
Add the following directive to your apache conf or .htaccess:
CGIPassAuth on
See here for details.
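For example, scoped to PHP files only (a sketch, assuming Apache 2.4.13 or newer, where the directive was introduced):
<FilesMatch "\.php$">
CGIPassAuth on
</FilesMatch>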
Recently I had a problem with this architecture.
In my environment, the proxy to PHP-FPM was configured as follows:
<IfModule proxy_module>
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/usr/local/apache2/htdocs/$1
ProxyTimeout 1800
</IfModule>
I fixed the issue by setting up the SetEnvIf directive as follows:
<IfModule proxy_module>
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/usr/local/apache2/htdocs/$1
ProxyTimeout 1800
</IfModule>
I didn't find any similar setting for mod_proxy_fcgi, BUT it just works for me by default. It asks for user authorization (via .htaccess as usual) and PHP receives it; it works just like with mod_php, or with mod_fastcgi and pass-header. I don't know if I was helpful...
EDIT:
It only works on teszt.com/ when using the DirectoryIndex. If I pass the PHP file name explicitly (even index.php!) it just doesn't work; the Authorization header is not passed to PHP. This is a blocker for me, but I don't want to downgrade to Apache 2.2 (and mod_fastcgi), so I am migrating to nginx (on this machine too).

Applying IP rules to HTTP only (and not HTTPS) with .htaccess

I have been setting up an IP blocklist recently, and I was wondering: is it possible to block an IP that is connecting via HTTP but not block it when it connects via HTTPS? There was a post on SO, ".Htaccess rules to redirect respective HTTP links to HTTP and HTTPS to HTTPS?", which is similar, but it uses mod_rewrite, which I have had horrible experiences with and which has only given me 500 errors in the past. Is there any way to do it with the standard format?
order allow,deny
allow from 192.168.1.0/24
deny from all
I need support for IPv6 addresses too. If the rewrite method is the only option, in your answer could you include a link that I could look at to perform my task properly? Many thanks!
I am using Apache/2.2.20 (Ubuntu)
What you desire isn't built into Apache's .htaccess mechanism. Simply put: no protocol-level conditions are supported by mod_auth or mod_access. Furthermore, what you seek breaks the expected assumption that if you provide a resource over HTTP, the same path will work over HTTPS. This will cause surprising results for people using HTTPS enforcers.
But, if you're dead set on doing something like this, I would recommend Squid. You can use it to do all kinds of nifty things, like denying access to the cache from certain protocols on a per-file basis, and otherwise fiddling with data coming off your Apache server before you serve it to your users.

How can I redirect requests to specific files above the site root?

I'm starting up a new web-site, and I'm having difficulties enforcing my desired file/folder organization:
For argument's sake, let's say that my website will be hosted at:
http://mywebsite.com/
I'd like (have set up) Apache's Virtual Host to map http://mywebsite.com/ to the /fileserver/mywebsite_com/www folder.
The problem arises because I've decided that I'd like to put a few files (favicon.ico and robots.txt) into a folder that is ABOVE the /www folder that Apache maps http://mywebsite.com/ to:
robots.txt+favicon.ico go into => /fileserver/files/mywebsite_com/stuff
So, when people go to http://mywebsite.com/robots.txt, Apache would serve them the file from /fileserver/files/mywebsite_com/stuff/robots.txt.
I've tried to setup a redirection via mod_rewrite, but alas:
RewriteRule ^(robots\.txt|favicon\.ico)$ ../stuff/$1 [L]
did me no good, basically because I was telling Apache to serve something that is above its mounted root.
Is it somehow possible to achieve the desired functionality by setting up Apache's (2.2.9) Virtual Hosts differently, or defining a RewriteMap of some kind that would rewrite the URLs in question not into other URLs, but into system file paths instead?
If not, what would be the preferred course of action for the desired organization (if any)?
I know that I can access the aforementioned files via PHP and then stream them, say with readfile(..), but I'd like to have Apache do as much of the work as possible; it's bound to be faster than doing I/O through PHP.
Thanks a lot, this has deprived me of hours of constructive work already. Not to mention poor Apache getting restarted every few minutes. Think of the poor Apache :)
It seems you are set on using a RewriteRule. However, I suggest you use an Alias instead:
Alias /robots.txt /fileserver/files/mywebsite_com/stuff/robots.txt
Additionally, you will have to tell Apache to allow access to that file. If you have more than one file treated this way, do it for the complete directory:
<Directory /fileserver/files/mywebsite_com/stuff>
Order allow,deny
Allow from all
</Directory>
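Since the question mentions favicon.ico as well, a second Alias of the same shape (a sketch) covers it:
Alias /favicon.ico /fileserver/files/mywebsite_com/stuff/favicon.ico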
Can you use symlinks?
ln -s /fileserver/files/mywebsite_com/stuff/robots.txt /fileserver/files/mywebsite_com/stuff/favicon.ico /fileserver/mywebsite_com/www/
(ln is like cp, but creates symlinks instead of copies with -s.)
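Note that Apache only follows symlinks if the document root allows it; a minimal sketch for the vhost, in case symlink following isn't already enabled:
<Directory /fileserver/mywebsite_com/www>
Options +FollowSymLinks
</Directory>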