Disabling TRACE request method on Apache/2.0.52 - apache

By default, Apache 2.0.52 will respond to any HTTP TRACE request that it receives. This is a potential security problem because it can allow certain types of XSS attacks. For details, see http://www.apacheweek.com/issues/03-01-24#news
I am trying to disable TRACE requests by following the instructions shown in the page linked above. I added the following lines to my httpd.conf file and restarted Apache:
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
However, when I send a TRACE request to my web server, it seems to ignore the rewrite rules and responds as if TRACE requests were still enabled.
For example:
[admin2@dedicated ~]$ telnet XXXX.com 80
Trying XXXX...
Connected to XXXX.com (XXXX).
Escape character is '^]'.
TRACE / HTTP/1.0
X-Test: foobar
HTTP/1.1 200 OK
Date: Sat, 11 Jul 2009 17:33:41 GMT
Server: Apache/2.0.52 (Red Hat)
Connection: close
Content-Type: message/http
TRACE / HTTP/1.0
X-Test: foobar
Connection closed by foreign host.
The server should respond with 403 Forbidden. Instead, it echoes back my request with a 200 OK.
As a test, I changed the RewriteCond to %{REQUEST_METHOD} ^GET
When I do this, Apache correctly responds to all GET requests with 403 Forbidden. But when I change GET back to TRACE, it still lets TRACE requests through.
How can I get Apache to stop responding to TRACE requests?

Some versions (Apache 2.0.55 and later) support:
TraceEnable Off

I figured out the correct way to do it.
I had tried placing the block of rewrite directives in three places: inside the <Directory "/var/www/html"> section of httpd.conf, at the top of httpd.conf, and in /var/www/html/.htaccess. None of these three methods worked.
Finally, however, I tried putting the block inside the <VirtualHost *:80> section of my httpd.conf. For some reason, it works when it is placed there.
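For reference, a sketch of the placement that worked (server name and paths are illustrative, not my real configuration):

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/html

    # Answer TRACE requests with 403 Forbidden
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^TRACE
    RewriteRule .* - [F]
</VirtualHost>
```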

As you've said, it works in your VirtualHost block. As you didn't show your httpd.conf, I can't say why your initial attempt didn't work - mod_rewrite is context-sensitive.
It failed in the <Directory> section because it's not really relevant there; that context is generally for access control. If it didn't work in the .htaccess file, it's likely that Apache wasn't looking for it (you can use AllowOverride to enable .htaccess files).

Related

Jenkins links point to http instead of HTTPS resulting in login screen infinite loop

For some reason Jenkins is redirecting the login screen back to the login screen when a successful login is made. See the attached packet trace. If I give the expected URL using https instead of HTTP all the pages load fine.
I have Jenkins configured behind a reverse proxy using apache. The proxy redirects traffic at /jenkins to :8080/jenkins. The base url is set to https://domain/jenkins and the jenkins --prefix parameter is set to /jenkins. I appreciate any help!
I found a workaround. There was an error in my rewrite rule in the http VirtualHost in Apache. With the extra slash removed, Jenkins now works, but it is still sending http packets into the proxy server, which then need to be rewritten to https instead of just posting all-https links. It works, but there is still an issue with Jenkins, unfortunately.
RewriteEngine on
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}$1 [R,L]
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
The extra slash in the RewriteRule was causing the URL to be rewritten slightly wrong. Most web applications I've used previously can evidently tolerate this error, as I only encountered the problem three years after initially setting up the server.

Fix Insecure HTTP Methods on Web Servers

My client is asking:
The following web servers expose a number of different methods to end users, which can put the web service at varying degrees of risk. Acceptable web methods are typically GET, POST, and CONNECT (in the case of HTTPS).
• Server A
• Server B
It was found that the OPTIONS HTTP method is available on the web servers. The OPTIONS method allows an attacker to enumerate the methods available on the servers, which may include the TRACE method, leaving them vulnerable to an HTTP TRACE cross-site scripting vulnerability. This is because the TRACE method simply echoes user-supplied input back to the end user.
Now, how do I disable these methods, how do I check them, and will there be any downtime for this change?
The server is running CentOS.
First, check whether the TRACE and OPTIONS methods are enabled:
curl -i -X TRACE <URL>
curl -i -X OPTIONS <URL>
If the HTTP response is 200, the method is enabled.
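As a quick sanity check on those responses, here is a small helper (purely illustrative, not part of any standard tooling) that classifies the status line curl prints first:

```shell
# Classify the first line of an HTTP response, e.g. "HTTP/1.1 200 OK".
# A 2xx status means the method went through; 403/405 means it was refused.
check_method_status() {
  status=$(printf '%s\n' "$1" | awk '{print $2}')
  case "$status" in
    2*)      echo "enabled" ;;
    403|405) echo "blocked" ;;
    *)       echo "unclear ($status)" ;;
  esac
}

check_method_status "HTTP/1.1 200 OK"                  # -> enabled
check_method_status "HTTP/1.1 405 Method Not Allowed"  # -> blocked
```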
To disable them and allow only GET, POST, and CONNECT:
The first thing to do is make sure that mod_rewrite is loaded. If mod_rewrite.so is missing from your Apache configuration but you have it installed (and your install location is /usr/local/apache), then add the following statement to your httpd.conf:
LoadModule rewrite_module "/usr/local/apache/modules/mod_rewrite.so"
Then add the following as well to your httpd.conf file or within <VirtualHost>...</VirtualHost>:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_METHOD} !^(GET|POST|CONNECT)
RewriteRule .* - [F]
</IfModule>
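To see which methods that condition actually rejects, here is a small offline sketch (shell, illustrative only) mirroring the !^(GET|POST|CONNECT) test:

```shell
# Mirror of the RewriteCond above: any method that does not start with
# GET, POST or CONNECT would be answered with 403 Forbidden.
method_fate() {
  case "$1" in
    GET*|POST*|CONNECT*) echo "allowed" ;;
    *)                   echo "403 Forbidden" ;;
  esac
}

for m in GET POST CONNECT HEAD OPTIONS TRACE PUT DELETE; do
  printf '%-8s %s\n' "$m" "$(method_fate "$m")"
done
```

Note that this pattern also rejects HEAD, which many ordinary clients and proxies use; consider adding it to the allowed list if that matters for your site.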
Or, to disable only TRACE:
For Apache 2 this can be done by adding the following to the main httpd.conf file:
TraceEnable off
Then restart Apache.

Use apache http server reverse-proxy to send a request expected not to return a response

Need somebody to push me in the right direction.
We're using an Apache HTTP Server (http1) reverse proxy to send requests to another HTTP server (http2). The challenge is that http2 is not expected to send an HTML page in its response back to http1.
The http2 log does show the request coming in. However, http1 logs an HTTP 502 error:
Internal error (specific information not available): [client ] AH01102: error reading status line from remote server localhost:9001
[proxy:error] [client ] AH00898: Error reading from remote server returned by /app/myContext/LogMessage
Here's http2 log which returns HTTP status 200:
"GET /app/myContext/LogMessage HTTP/1.1" 200 -
Please note that those requests that result in an HTML page work fine.
What do you think the approach should be here? Maybe a reverse proxy is not a good choice for this type of request?
We have httpd.conf on http1 set up this way:
ProxyPass "/app/myContext/" "http://localhost:9001/app/myContext/"
ProxyPassReverse "/app/myContext/" "http://localhost:9001/app/myContext/"
Disable ErrorLog on http1 altogether:
ErrorLog /dev/null
Have you tried having http1 ignore these requests using mod_log_config? Going by the examples there, the format string might be:
CustomLog expr=%{REQUEST_STATUS} -eq 502 && %{REQUEST_URI} =~ /app\/myContext/ ...
Or the LogFormat string might work too:
LogFormat %!502...
(h/t to Avoid logging of certain missing files into the Apache2 error log)
Is your problem that http1 is emitting 502 to the requestor? In that case, maybe use an <If> and a custom ErrorDocument?
<If "%{REQUEST_URI} =~ m#/app/myContext#">
ErrorDocument 502 'OK'
</If>
Went with the following solution: In http2 re-route the LogMessage call to fetch a blank html page:
1. Create blankfile.html in the /htdocs directory.
2. In httpd.conf add this line:
RewriteRule ^.*(app/myContext/LogMessage).* /blankfile.html [L]
This works for us since the whole purpose of LogMessage is to log the request in http2 access_log.
I'd just like to thank @cowbert for working so diligently with me on this!

How to disable HTTP 1.0 protocol in Apache?

HTTP 1.0 has a security weakness related to session hijacking.
I want to disable it on my web server.
You can check against the SERVER_PROTOCOL variable in a mod-rewrite clause. Be sure to put this rule as the first one.
RewriteEngine On
RewriteCond %{SERVER_PROTOCOL} ^HTTP/1\.0$
RewriteCond %{REQUEST_URI} !^/path/to/403/document.html$
RewriteRule ^ - [F]
The additional negative check for !^/path/to/403/document.html$ is so that the forbidden page can be shown to the users. It would otherwise lead to a recursion.
If you are on a name-based virtual host (and each virtual server does not have its own separate IP address), then it is technically impossible to connect to your virtual host using HTTP/1.0; only the default server --the first virtual server defined-- will be accessible. This is because HTTP/1.0 does not support the HTTP "Host" request header, and the Host header is required on name-based virtual hosts in order to "pick" which virtual host the request is addressed to. In most cases, the response to a true HTTP/1.0 request will be a 400 Bad Request.
If you did manage to get that code working, but you later tried to use custom error documents (see the Apache core ErrorDocument directive), then the result of blocking a request would be an 'infinite' loop: the server would try to respond with a 403 Forbidden response code and serve the custom 403 error document. But this would result in another 403 error, because access to all resources --including the custom 403 page-- is denied. So the server would generate another 403 error and then try to respond to it, creating another 403, and another, and another... This would continue until either the client or the server gave up.
I'd suggest something like:
SetEnvIf Request_Protocol HTTP/1\.0$ Bad_Req
SetEnvIf Request_URI ^/path-to-your-custom-403-error-page\.html$ Allow_Bad_Req
Order Deny,Allow
Deny from env=Bad_Req
Allow from env=Allow_Bad_Req
In mod_rewrite, something like:
RewriteCond %{THE_REQUEST} HTTP/1\.0$
RewriteCond %{REQUEST_URI} !^/path-to-your-custom-403-error-page\.html$
RewriteRule ^ - [F]
This will be possible (note the future tense, as of October 2018) with Apache 2.5, using the PolicyVersion directive in mod_policy. The PolicyVersion directive sets the lowest version of the HTTP protocol that is accepted by the server, virtual host, or directory structure, depending on where the directive is placed.
First enable the policy module:
a2enmod mod_policy
Then in the server config, vhost, or directory (will not work in .htaccess), add:
PolicyVersion enforce HTTP/1.1
Finally restart the server:
systemctl restart apache2

how to block cross frame scripting in Apache for svn

I have SVN configured through Apache 2.4.18 on Linux 6.6. Next, I have to disable cross-frame scripting for my SVN URL, which looks like https://servername/svn/projectA. I compiled mod_security2.so, copied it to the /modules directory, and loaded it; then in the VirtualHost I have the lines below.
LoadFile /usr/lib64/libxml2.so
LoadFile /usr/lib64/liblua-5.1.so
LoadModule security2_module modules/mod_security2.so
httpd-vhosts.conf
<VirtualHost *:80>
ServerAdmin email@domain.com
DocumentRoot "/var/local/apache/httpd2.4.18/htdocs"
ServerName servername.fqdn.com
# For http to https redirect
Redirect / https://servername
TraceEnable off
RewriteEngine on
RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
RewriteRule .* - [F]
SecRuleEngine On
#SecFilterEngine On
#SecFilterForceByteRange 32 126
#SecFilterScanPOST On
#SecFilter "<( |\n)*script"
SecRequestBodyAccess On
SecResponseBodyAccess On
ErrorLog "logs/error_log"
CustomLog "logs/access_log" common
</VirtualHost>
The rules that Apache did not accept are:
SecFilterEngine
SecFilterForceByteRange
SecFilterScanPOST
SecFilter
Instead of SecFilterEngine, it accepts SecRuleEngine, but I do not know the replacements for the other rules. I am using modsecurity-2.9.0 compiled from source. The error I see is below.
[root@server extra]# /var/local/apache/httpd2.4.18/bin/apachectl configtest
AH00526: Syntax error on line 45 of /var/local/apache/httpd2.4.18/conf/extra/httpd-vhosts.conf:
Invalid command 'SecFilterForceByteRange', perhaps misspelled or defined by a module not included in the server configuration.
Does anyone know the mod_security2-supported equivalents of SecFilterForceByteRange, SecFilterScanPOST, and SecFilter? I also read the mod_security documentation but could not figure out how to solve the issue. I followed the URL below.
http://www.unixpearls.com/how-to-block-xss-vulnerability-cross-site-scripting-in-apache-2-2-x/
[EDIT]
It was solved by adding the response header.
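The header in question is presumably X-Frame-Options, discussed in the answer below; a minimal sketch, assuming mod_headers is enabled:

```apache
# Load mod_headers if it isn't already (path is illustrative)
# LoadModule headers_module modules/mod_headers.so

# Refuse to let other origins frame the SVN pages
Header always set X-Frame-Options "SAMEORIGIN"
```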
All those unsupported commands are ModSecurity v1 directives; the configuration language was completely rewritten for ModSecurity v2.
The rule you would want would be something like this:
SecRule ARGS "<( |\n)*script" "phase:2,id:1234,deny"
This basically scans any of your arguments (as parameters or the body) for items like this:
<script
or
< script
or
<
script
That's not a bad start to trying to protect against XSS, but it is a bit basic.
OWASP has a Core Rule Set of ModSecurity rules and their XSS rules are much more complex and can be seen here: https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/master/base_rules/modsecurity_crs_41_xss_attacks.conf
XSS can be exploited in a number of ways, some of which will make it to your server (and this sort of thing might catch) and some which might not even make it to your server at all (and so which this can't protect against).
The best way to protect against XSS is to look at Content Security Policy, which allows you to explicitly say what javascript you want to allow on your site, and what not, and to explicitly deny in-line scripts if you want. This may require some clean up of your site to remove inline scripts and is not always the easiest to set up, particularly if loading third party assets and widgets on your site, but is the most robust protection.
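As a concrete starting point, a Content-Security-Policy header can also be set with mod_headers; the policy below is only an illustrative minimum and will need tailoring to your site:

```apache
# Allow scripts and other resources only from this site's own origin;
# inline <script> blocks are refused under this policy.
Header always set Content-Security-Policy "default-src 'self'; script-src 'self'"
```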
The X-Frame-Options header is useful to stop your site from being framed, where someone overlays content to make you think you are clicking the real site's buttons and fields while actually clicking theirs. It's not really a form of XSS, since the attacker is layering an invisible window on top of your site rather than injecting script directly into it, but it can have similar effects. It's a good header to use.