Fix Insecure HTTP Methods on Web Servers - api

my client is asking:
The following web servers expose a number of HTTP methods to end users, which can put the web service at varying degrees of risk. Acceptable methods are typically GET, POST, and CONNECT (in the case of HTTPS).
• Server a
• Server b
It was found that the OPTIONS HTTP method is available on these web servers. OPTIONS allows an attacker to enumerate the methods each server accepts; servers that also accept the TRACE method leave themselves open to an HTTP TRACE cross-site scripting vulnerability, because TRACE simply echoes the user-supplied request back to the end user.
Now, how do I disable these methods, how do I check for them, and will there be any downtime when making the change?
The server is running CentOS.

First, check whether the TRACE and OPTIONS methods are enabled:
curl -i -X TRACE <URL>
curl -i -X OPTIONS <URL>
If the HTTP response is 200, the method is enabled.
To disable everything except GET, POST, and CONNECT:
The first thing to do is make sure that mod_rewrite is loaded. If mod_rewrite.so is missing from your Apache configuration but you have it installed (and your install location is /usr/local/apache), then add the following statement to your httpd.conf:
LoadModule rewrite_module "/usr/local/apache/modules/mod_rewrite.so"
Then add the following as well to your httpd.conf file or within <VirtualHost>...</VirtualHost>:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_METHOD} !^(GET|POST|CONNECT)$
RewriteRule .* - [F]
</IfModule>
Or, to disable only TRACE:
For Apache 2 this can be done by adding the following to the main httpd.conf file:
TraceEnable off
Then restart Apache.
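If you prefer not to rely on mod_rewrite, the same allowlist can be expressed with core directives. This is a minimal sketch, assuming Apache 2.4 and a document root of /var/www/html (both assumptions; adjust to your layout):

```apache
# Reject every method except GET and POST inside the document root.
# CONNECT is only meaningful for forward proxies, so it is not listed here.
<Directory "/var/www/html">
    <LimitExcept GET POST>
        Require all denied
    </LimitExcept>
</Directory>

# Refuse TRACE globally (core directive, available since 1.3.34 / 2.0.55).
TraceEnable off
```

As for downtime: a graceful reload (apachectl -k graceful, or systemctl reload httpd on CentOS 7 and later) re-reads the configuration without dropping in-flight connections, so the change should not require an outage.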

Related

How to disable Apache HTTP Header info in AWS Load Balancer Response?

I have a node.js environment deployed using AWS Elastic Beanstalk on an Apache server. I have run a PCI scan on the environment and I'm getting 2 failures:
Apache ServerTokens Information Disclosure
Web Server HTTP Header Information Disclosure
Naturally I'm thinking I need to update the httpd.conf file with the following:
ServerSignature Off
ServerTokens Prod
However, given the nature of Elastic Beanstalk and Elastic Load Balancers, as soon as the environment scales, adds new servers, reboots etc the instance config will be overwritten.
I have also tried putting the following into an .htaccess file:
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule .* https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]
# Security hardening for PCI
Options -Indexes
ServerSignature Off
# Disallow iframe usage outside of loylap.com for PCI security scan
Header set X-Frame-Options SAMEORIGIN
On the Node.js side I use the "helmet" package to apply some security measures, and the "express-force-https" package to ensure the application enforces HTTPS. However, these only seem to take effect after the Express application is initiated and after the redirect.
I have Elastic Load Balancer listeners set up for both HTTP (port 80) and HTTPS (port 443), however the HTTP requests are immediately routed to HTTPS.
When I run the following curl command:
curl -I https://myenvironment.com --head
I get an acceptable response with the following line:
Server: Apache
However when I run the same request on the http endpoint (i.e. before redirects etc):
curl -I http://myenvironment.com --head
I get a response that discloses more information about my server than it should, and hence the PCI failure:
Server: Apache/2.4.34 (Amazon)
How can I force my environment to restrict the http header response on HTTP as well as HTTPS?
Credit to @stdunbar for leading me to the correct solution here using ebextensions.
The solution worked for me as follows:
Create a file in the project root called .ebextensions/01_server_hardening.config
Add the following content to the file:
files:
  "/etc/httpd/conf.d/03_server_hardening.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      ServerSignature Off
      ServerTokens Prod
container_commands:
  01_reload_httpd:
    command: "sudo service httpd reload"
(Note: the indentation is important in this YAML file - 2 spaces rather than tabs in the above code).
During Elastic Beanstalk deployment, this creates a new conf file in the /etc/httpd/conf.d folder, which Apache on the instance is set up to include alongside httpd.conf by default.
The content manually turns off the ServerSignature and sets the ServerTokens to Prod, achieving the PCI standard.
Running the container command forces an httpd reload (for this particular version of Amazon Linux; Ubuntu and other distributions would require their own standard reload command).
After deploying the new commands to my EB environment, my curl commands run as expected on HTTP and HTTPS.
An easier and better solution exists now.
The folder /etc/httpd/conf.d/elasticbeanstalk is deleted when the built-in application server is restarted (e.g. when using EB with built-in Tomcat). Since .ebextensions are not re-run at that point, the above solution stops working.
This is only the case when the application server is restarted (through e.g. Lambda or the Elastic Beanstalk web-console). If the EC2 instance is restarted this is not an issue.
The solution is to place a .conf file in a sub-folder of .ebextensions:
.ebextensions/
  httpd/
    conf.d/
      name_of_your_choosing.conf
The content of the file is the same as the output of the .ebextensions example above, e.g.
ServerSignature Off
ServerTokens Prod
This solution will survive a restart of the application server and is much easier to create and manage.
You will ultimately need to implement some ebextensions to have this change applied to each of your Beanstalk instances. This is a mechanism that allows you to create one or more files that are run during the initialization of the beanstalk. I have an older one that I have not tested in your exact situation but it does the HTTP->HTTPS rewrite like you're showing. It was used in the Tomcat Elastic Beanstalk type - different environments may use different configurations. Mine looks like:
files:
  "/tmp/00_application.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      <VirtualHost *:80>
        RewriteEngine On
        RewriteCond %{HTTP:X-Forwarded-Proto} !https
        RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R,L]
      </VirtualHost>
container_commands:
  01_enable_rewrite:
    command: "echo 'LoadModule rewrite_module modules/mod_rewrite.so' >> /etc/httpd/conf/httpd.conf"
  02_cp_application_conf:
    command: "cp /tmp/00_application.conf /etc/httpd/conf.d/elasticbeanstalk/00_application.conf"
Again, this is a bit older and has not been tested for your exact use case but hopefully it can get you started.
This will need to be packaged with your deployment - i.e. in Java a .jar or .war or a .zip in other environments. Take a look at the documentation link to learn more about deployments.
There is a small change in the configuration file path now that AWS has introduced Amazon Linux 2: the file goes in a .platform directory instead of .ebextensions:
.platform/
  httpd/
    conf.d/
      whatever-file-name-you-want.conf
In .platform/httpd/conf.d/whatever-file-name-you-want.conf, add the following two lines:
ServerSignature Off
ServerTokens Prod
The above is for Apache. Since AWS now uses nginx as the default reverse proxy, replace httpd with nginx in the path and it should work.
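Since the nginx variant is only hinted at above, here is a minimal sketch of the equivalent hardening for the default nginx proxy on Amazon Linux 2 (the file name is arbitrary, e.g. .platform/nginx/conf.d/security.conf):

```nginx
# Hide the nginx version number from the Server header and error pages.
# Note: nginx has no exact equivalent of ServerTokens Prod; removing the
# Server header entirely requires an extra module such as headers-more.
server_tokens off;
```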

How to disable HTTP 1.0 protocol in Apache?

HTTP 1.0 has a security weakness related to session hijacking.
I want to disable it on my web server.
You can check against the SERVER_PROTOCOL variable in a mod-rewrite clause. Be sure to put this rule as the first one.
RewriteEngine On
RewriteCond %{SERVER_PROTOCOL} ^HTTP/1\.0$
RewriteCond %{REQUEST_URI} !^/path/to/403/document.html$
RewriteRule ^ - [F]
The additional negative check for !^/path/to/403/document.html$ is so that the forbidden page can be shown to the users. It would otherwise lead to a recursion.
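Putting the rule and the exception together with a matching ErrorDocument, a complete sketch could look like this (reusing the placeholder path from above):

```apache
# Serve a custom page for 403 responses.
ErrorDocument 403 /path/to/403/document.html

RewriteEngine On
# Block HTTP/1.0 requests ...
RewriteCond %{SERVER_PROTOCOL} ^HTTP/1\.0$
# ... but still allow the 403 page itself, to avoid the recursion
# described above.
RewriteCond %{REQUEST_URI} !^/path/to/403/document\.html$
RewriteRule ^ - [F]
```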
If you are on a name-based virtual host (and each virtual server does not have its own separate IP address), then it is technically impossible to connect to your virtual host using HTTP/1.0; only the default server (the first virtual server defined) will be accessible. This is because HTTP/1.0 does not support the HTTP "Host" request header, and the Host header is required on name-based virtual hosts in order to "pick" which virtual host the request is addressed to. In most cases, the response to a true HTTP/1.0 request will be a 400 Bad Request.
If you did manage to get that code working, but you later tried to use custom error documents (see the Apache core ErrorDocument directive), then the result of blocking a request would be an 'infinite' loop: the server would try to respond with a 403 Forbidden response code and serve the custom 403 error document. But this would result in another 403 error, because access to all resources, including the custom 403 page, is denied. So the server would generate another 403, then try to respond to it, creating another 403, and another, and another. This would continue until either the client or the server gave up.
I'd suggest something like:
SetEnvIf Request_Protocol HTTP/1\.0$ Bad_Req
SetEnvIf Request_URI ^/path-to-your-custom-403-error-page\.html$ Allow_Bad_Req
Order Deny,Allow
Deny from env=Bad_Req
Allow from env=Allow_Bad_Req
In mod_rewrite, something like:
RewriteCond %{THE_REQUEST} HTTP/1\.0$
RewriteCond %{REQUEST_URI} !^/path-to-your-custom-403-error-page\.html$
RewriteRule ^ - [F]
This will (note the FUTURE tense - as of October 2018) be possible with Apache 2.5, using the PolicyVersion directive in mod_policy. The PolicyVersion directive sets the lowest level of the HTTP protocol that is accepted by the server, virtual host, or directory structure - depending on where the directive is placed.
First enable the policy module:
a2enmod mod_policy
Then in the server config, vhost, or directory (will not work in .htaccess), add:
PolicyVersion enforce HTTP/1.1
Finally restart the server:
systemctl restart apache2

HTTP Post request to aws ec2 directory /opt/lampp/htdocs/donate/ denied

I am trying to make a POST request to http://localhost/donate/payment.php.
It works fine when I run the application locally.
However when I change the URL to
"http://ec2-xx-xxx-xx-xxx.ap-southeast-2.compute.amazonaws.com/opt/lampp/htdocs/donate/payment.php"
I get a page-not-found error. I can guarantee that the file is present in that location.
I have tried several things, like recursively changing the permissions of /opt to 777, and changing the Apache server's default port from 80.
I even tried placing a .htaccess file inside the donate folder; its contents are:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^yourdomain.com
RewriteRule (.*) http://www.yourdomain.com/$1 [R=301,L]
RewriteCond %{HTTP_HOST} ^www\.yourdomain\.com$
RewriteCond %{REQUEST_URI} !^/WebProjectFolder/
RewriteRule (.*) /WebProjectFolder/$1
All attempts have failed. Is there anything else I am missing here? I have installed Bitnami Parse Server, and I am able to access that over HTTP in the browser; it is present in the /apps folder under the root folder.
Does AWS override any security permissions?
Assuming /opt/lampp/htdocs/ is your document root, shouldn't the URL be http://ec2-xx-xxx-xx-xxx.ap-southeast-2.compute.amazonaws.com/donate/payment.php?
You might also want to verify a couple of things:
Make sure your security policy has its inbound port 80 open to the public (or where you'll be visiting from)
Assuming you're using Apache httpd, make sure it accepts connections on the external interface or all interfaces (e.g. Listen 80, Listen 0.0.0.0:80, etc)
First, if you actually get an error from your Apache server, the issue has nothing to do with AWS. If there were misconfigured security groups or NACLs, you'd never reach port 80 (HTTP) at all.
Second, never ever chmod -R 777: not only can it break your app's behavior, but, especially with PHP, it opens up security risks. Yes, this seems not to matter, until your instance becomes part of a botnet and starts sending spam.
At a glance, I would say your Apache configuration lacks something, like a catch-all VirtualHost:
# from https://httpd.apache.org/docs/2.4/vhosts/examples.html
<VirtualHost *:80>
DocumentRoot "/www/example1"
ServerName www.example.com
# Other directives here
</VirtualHost>
It seems like your default location points to another directory, possibly the default one.
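Concretely, if the document root really is /opt/lampp/htdocs, a minimal catch-all virtual host might look like the sketch below (paths taken from the question; the Directory block is needed on Apache 2.4 so the files are actually readable):

```apache
<VirtualHost *:80>
    DocumentRoot "/opt/lampp/htdocs"
    <Directory "/opt/lampp/htdocs">
        Require all granted
    </Directory>
</VirtualHost>
```

With this in place, the page is reached as http://ec2-xx-xxx-xx-xxx.ap-southeast-2.compute.amazonaws.com/donate/payment.php; the filesystem prefix never appears in the URL.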

Apache 2.4 + PHP-FPM and Authorization headers

Summary:
Apache 2.4's mod_proxy does not seem to be passing the Authorization headers to PHP-FPM. Is there any way to fix this?
Long version:
I am running a server with Apache 2.4 and PHP-FPM. I am using APC for both opcode caching and user caching. As recommended by the Internet, I am using Apache 2.4's mod_proxy_fcgi to proxy the requests to FPM, like this:
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/foo/bar/$1
The setup works fine, except for one thing: APC's bundled apc.php, used to monitor the status of APC, does not allow me to log in (which is required for looking at user cache entries). When I click "User cache entries" to see the user cache, it asks me to log in; clicking the login button displays the usual HTTP login form, but entering the correct login and password yields no success. This function works perfectly when running with mod_php instead of mod_proxy + PHP-FPM.
After some googling I found that other people had the same issue and figured out that it was because Apache was not passing the Authorization HTTP headers to the external FastCgi process. Unfortunately I only found a fix for mod_fastcgi, which looked like this:
FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
Is there an equivalent setting or some workaround which would also work with mod_proxy_fcgi?
Various Apache modules will strip the Authorization header, usually for "security reasons". They all have different obscure settings you can tweak to overrule this behaviour, but you'll need to determine exactly which module is to blame.
You can work around this issue by passing the header directly to PHP via an environment variable:
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
See also Zend Server Windows - Authorization header is not passed to PHP script
In some scenarios, even this won't work directly and you must also change your PHP code to access $_SERVER['REDIRECT_HTTP_AUTHORIZATION'] rather than $_SERVER['HTTP_AUTHORIZATION']. See When setting environment variables in Apache RewriteRule directives, what causes the variable name to be prefixed with "REDIRECT_"?
This took me a long time to crack, since it's not documented under mod_proxy or mod_proxy_fcgi.
Add the following directive to your apache conf or .htaccess:
CGIPassAuth on
See here for details.
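For reference, CGIPassAuth is only allowed in directory context (and .htaccess), so a typical placement, assuming PHP files are proxied to FPM as in the question, is:

```apache
# CGIPassAuth requires Apache 2.4.13 or later; directory context only.
<FilesMatch "\.php$">
    CGIPassAuth on
</FilesMatch>
```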
Recently I had a problem with this architecture.
In my environment, the proxy to PHP-FPM was configured as follows:
<IfModule proxy_module>
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/usr/local/apache2/htdocs/$1
ProxyTimeout 1800
</IfModule>
I fixed the issue by setting up the SetEnvIf directive as follows:
<IfModule proxy_module>
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/usr/local/apache2/htdocs/$1
ProxyTimeout 1800
</IfModule>
I didn't find any similar setting for mod_proxy_fcgi, BUT it just works for me by default. It asks for user authorization (.htaccess as usual) and PHP receives it; it works just like with mod_php, or FastCGI with pass-header. I don't know if I was helpful...
EDIT:
It only works on teszt.com/ when using the DirectoryIndex. If I pass the PHP file name (even index.php!), it just doesn't work and the auth is not passed to PHP. This is a blocker for me, but I don't want to downgrade to Apache 2.2 (and mod_fastcgi), so I am migrating to nginx (on this machine too).

Disabling TRACE request method on Apache/2.0.52

By default, Apache 2.0.52 will respond to any HTTP TRACE request that it receives. This is a potential security problem because it can allow certain types of XSS attacks. For details, see http://www.apacheweek.com/issues/03-01-24#news
I am trying to disable TRACE requests by following the instructions shown in the page linked above. I added the following lines to my httpd.conf file and restarted Apache:
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
However, when I send a TRACE request to my web server, it seems to ignore the rewrite rules and responds as if TRACE requests were still enabled.
For example:
[admin2@dedicated ~]$ telnet XXXX.com 80
Trying XXXX...
Connected to XXXX.com (XXXX).
Escape character is '^]'.
TRACE / HTTP/1.0
X-Test: foobar
HTTP/1.1 200 OK
Date: Sat, 11 Jul 2009 17:33:41 GMT
Server: Apache/2.0.52 (Red Hat)
Connection: close
Content-Type: message/http
TRACE / HTTP/1.0
X-Test: foobar
Connection closed by foreign host.
The server should respond with 403 Forbidden. Instead, it echoes back my request with a 200 OK.
As a test, I changed the RewriteCond to %{REQUEST_METHOD} ^GET
When I do this, Apache correctly responds to all GET requests with 403 Forbidden. But when I change GET back to TRACE, it still lets TRACE requests through.
How can I get Apache to stop responding to TRACE requests?
Some versions require:
TraceEnable Off
I figured out the correct way to do it.
I had tried placing the block of rewrite directives in three places: in the <Directory "/var/www/html"> section of httpd.conf, at the top of httpd.conf, and in /var/www/html/.htaccess. None of these three methods worked.
Finally, however, I tried putting the block of code in the <VirtualHost *:80> section of my httpd.conf. For some reason, it works when it is placed there.
As you've said, that works in your VirtualHost block. As you didn't show your httpd.conf, I can't say why your initial attempt didn't work; it's context-sensitive.
It failed in the <Directory> section because it's not really relevant there; that section is generally for access control. If it didn't work in the .htaccess, it's likely that Apache wasn't looking for it (you can use AllowOverride to enable .htaccess files).
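For completeness, the placement that finally worked corresponds to this sketch (server name and paths are illustrative):

```apache
<VirtualHost *:80>
    ServerName XXXX.com
    DocumentRoot "/var/www/html"

    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^TRACE
    RewriteRule .* - [F]
</VirtualHost>
```

Note that the simpler TraceEnable off directive only exists from Apache 1.3.34 / 2.0.55 onward, so it is not available on the 2.0.52 server in the question; hence the mod_rewrite approach.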