Can someone please help me with setting up rules so that I log only the data submitted using POST? I have a form where I am submitting a name and an email ID, and I want just that part saved in the log file. In my scenario I want only the data below in my log file:
--29000000-C--
name1=ssn&email1=ssn%40gmail.com
--29000000-F--
HTTP/1.1 200 OK
X-Powered-By: PHP/7.2.4
Content-Length: 16
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
My present mod_security configuration looks like this:
<IfModule security2_module>
#Enable the module.
SecRuleEngine On
SecAuditEngine On
#Setup logging in a dedicated file.
SecAuditLog C:/wamp64/bin/apache/apache2.4.33/logs/website-audit.log
#Allow it to access requests body.
SecRequestBodyAccess On
SecAuditLogParts ABIFHZ
#Setup default action.
SecDefaultAction "nolog,noauditlog,allow,phase:2"
#Define the rule that will log the content of POST requests.
SecRule REQUEST_METHOD "^POST$" "chain,allow,phase:2,id:123"
SecRule REQUEST_URI ".*" "auditlog"
</IfModule>
I found a solution to my question. We can set the field below as per our requirement:
SecAuditLogParts ABIFHZ
In my case I set the field as:
SecAuditLogParts C
However, because ModSecurity always writes the mandatory A (header) and Z (trailer) parts, the entry will display as:
--84670000-A--
[29/Aug/2018:14:49:58 +0200] W4aWdqHJuCcOQzTIgCiEqAAAAD8 127.0.0.1 60735 127.0.0.1 80
--84670000-C--
name1=red&email1=red%40yahoo.com
--84670000-Z--
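For completeness, the relevant part of my working configuration now looks like this (parts A and Z still appear because ModSecurity always writes them):
<IfModule security2_module>
SecRuleEngine On
SecAuditEngine On
SecAuditLog C:/wamp64/bin/apache/apache2.4.33/logs/website-audit.log
SecRequestBodyAccess On
# Log only the request body; the A (header) and Z (trailer) parts are mandatory
SecAuditLogParts C
SecDefaultAction "nolog,noauditlog,allow,phase:2"
SecRule REQUEST_METHOD "^POST$" "chain,allow,phase:2,id:123"
SecRule REQUEST_URI ".*" "auditlog"
</IfModule>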
I have hosted a static webpage on Gitlab pages. The URL of the webpage is myname.gitlab.io
I have another website hosted with hostgator which has the URL "mysecondwebsite.com". "mysecondwebsite.com" has thousands of static html pages hosted on the various paths like "mysecondwebsite.com/charts/folder1/1.html", "mysecondwebsite.com/charts/folder1/2.html", "mysecondwebsite.com/charts/folder1/3.html" & so on.
I don't want "mysecondwebsite.com" or the pages in it to be accessible directly. Hence, I've enabled hotlink protection, which works as expected. Now, I also want to allow access to "mysecondwebsite.com" ONLY FROM myname.gitlab.io. That website has a list of hyperlinks which, when clicked, should open an appropriate page in "mysecondwebsite.com". To achieve this, I've entered the following in the .htaccess file on hostgator, but it isn't helping; I see 403 Forbidden:
# IP to allow
order allow,deny
deny from all
allow from gitlab.io
Current hotlink protection settings -
# DO NOT REMOVE THIS LINE AND THE LINES BELOW HOTLINKID:r2xGl7fjrh
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mysecondwebsite.com/.*$ [NC]
RewriteRule .*\.(.*|jpg|jpeg|gif|png|bmp|tiff|avi|mpeg|mpg|wma|mov|zip|rar|exe|mp3|pdf|swf|psd|txt|html|htm|php)$ https://mysecondwebsite.com [R,NC]
# DO NOT REMOVE THIS LINE AND THE LINES ABOVE r2xGl7fjrh:HOTLINKID
I am in no way an expert in web hosting. Please could I get some help getting this working?
UPDATED .htaccess
Options All -Indexes
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http(s)?://((myfirstwebsite\.com)|((www\.)?mysecondwebsite\.com))/ [NC]
RewriteRule .* - [F]
HTTP LIVE HEADER DUMP
https://mysecondwebsite.com/charts/thisfolder/thisfile.html
Host: mysecondwebsite.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Alt-Used: mysecondwebsite.com
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: cross-site
Sec-Fetch-User: ?1
GET: HTTP/2.0 403 Forbidden
cache-control: private, no-cache, no-store, must-revalidate, max-age=0
pragma: no-cache
content-type: text/html
content-length: 699
date: Wed, 06 Apr 2022 07:13:17 GMT
server: LiteSpeed
content-security-policy: upgrade-insecure-requests
alt-svc: h3=":443"; ma=2592000, h3-29=":443"; ma=2592000, h3-Q050=":443"; ma=2592000, h3-Q046=":443"; ma=2592000, h3-Q043=":443"; ma=2592000, quic=":443"; ma=2592000; v="43,46"
X-Firefox-Spdy: h2
---------------------
https://mysecondwebsite.com/favicon.ico
Host: mysecondwebsite.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0
Accept: image/avif,image/webp,*/*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Alt-Used: mysecondwebsite.com
Connection: keep-alive
Referer: https://mysecondwebsite.com/charts/thisfolder/thisfile.html
Sec-Fetch-Dest: image
Sec-Fetch-Mode: no-cors
Sec-Fetch-Site: same-origin
GET: HTTP/3.0 404 Not Found
content-type: text/html
last-modified: Mon, 28 Mar 2022 13:48:20 GMT
etag: "999-6241bca4-dfd29bee5117e228;br"
accept-ranges: bytes
content-encoding: br
vary: Accept-Encoding
content-length: 911
date: Mon, 04 Apr 2022 10:11:14 GMT
server: LiteSpeed
content-security-policy: upgrade-insecure-requests
alt-svc: h3=":443"; ma=2592000, h3-29=":443"; ma=2592000, h3-Q050=":443"; ma=2592000, h3-Q046=":443"; ma=2592000, h3-Q043=":443"; ma=2592000, quic=":443"; ma=2592000; v="43,46"
X-Firefox-Http3: h3
---------------------
allow from gitlab.io doesn't work on the HTTP Referer header as you seem to be expecting. Rather, it works based on the IP address of the user making the request.
Instead you want to use something that checks the referer and denies access when it doesn't contain myname.gitlab.io or your own website's host name. You can do that with mod_rewrite by placing the following in your .htaccess file:
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^http(s)?://((myname\.gitlab\.io)|((www\.)?mysecondwebsite\.com))/ [NC]
RewriteRule .* - [F]
This would allow referrers from your gitlab site, and would then allow those pages to fetch further resources such as images, js, and css. In this rule:
RewriteEngine on - turns on rewrites; this needs to be specified once in your .htaccess and is shared between all the rewrite rules and conditions
RewriteCond - specifies a condition for the next rewrite rule
! says that the following regular expression should be negated (not matched)
^ is the beginning of the regular expression
NC is "no case" meaning that this rule is case insensitive and will work for both upper-case and lower-case input
RewriteRule is the actual rule
.* says that it matches all URLs (in this case the condition specified above it is what matters)
- means that there is no destination URL
F says that it should show the "forbidden" status as opposed to redirecting or internally changing the URL.
The problem with this approach is that it will forbid some requests that actually are referred from gitlab: not all browsers send a Referer header in all circumstances.
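If that matters for your site, a common variation (a sketch, untested here) is to also let through requests that carry no Referer header at all:
RewriteEngine on
# Only apply the block when a Referer header is actually present
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http(s)?://((myname\.gitlab\.io)|((www\.)?mysecondwebsite\.com))/ [NC]
RewriteRule .* - [F]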
Please could you share the exception rule script you're thinking of?
This is just an alternative to @StephenOstermiller's excellent answer...
You could instead keep your existing "hotlink protection" script unaltered, as generated by your control panel GUI (and make any changes through the GUI as required). But include an additional rule before your hotlink protection to make an exception for any domains you need to give access to.
# Abort early if request is coming from an "allowed" domain
RewriteCond %{HTTP_REFERER} ^https://myname\.gitlab\.io($|/)
RewriteRule ^ - [L]
# Normal hotlink-protection follows...
This prevents the hotlink protection from being processed when the request is coming from the allowed domain. So access is permitted.
This does assume you have no other directives following this rule that should still be processed.
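For illustration, the combined .htaccess might look like this (a sketch that reuses the generated block from the question, with the extension list shortened):
RewriteEngine on
# Abort early if the request is coming from an "allowed" domain
RewriteCond %{HTTP_REFERER} ^https://myname\.gitlab\.io($|/)
RewriteRule ^ - [L]
# DO NOT REMOVE THIS LINE AND THE LINES BELOW HOTLINKID:r2xGl7fjrh
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?mysecondwebsite.com/.*$ [NC]
RewriteRule .*\.(jpg|jpeg|gif|png|html|htm)$ https://mysecondwebsite.com [R,NC]
# DO NOT REMOVE THIS LINE AND THE LINES ABOVE r2xGl7fjrh:HOTLINKID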
In my application I have implemented ModSecurity, and as the rule set is generic, I have already disabled a few rules for particular locations (URLs). But I am getting an OWASP rule error for the URL below and cannot find a way to disable the rules for it.
So please help me disable the rule causing the issue below. The error log is given below. Thanks in advance.
POST / HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 499
Host: accountingdev.com
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.4.1 (Java/1.8.0_72)
Cookie: FA512c3d57865ef2662e9b1421f5c4d8ad=3pr1b0illdbem2kq9f99kfrpn2
Accept-Encoding: gzip,deflate
--a42de647-C--
logoutRequest=%3Csamlp%3ALogoutRequest+xmlns%3Asamlp%3D%22urn%3Aoasis%3Anames%3Atc%3ASAML%3A2.0%3Aprotocol%22+ID%3D%22LR-24-AHmvxyBBAudEaobzuTMpXrdPtmmVhiUU1ed%22+Version%3D%222.0%22+IssueInstant%3D%222016-10-14T17%3A20%3A00Z%22%3E%3Csaml%3ANameID+xmlns%3Asaml%3D%22urn%3Aoasis%3Anames%3Atc%3ASAML%3A2.0%3Aassertion%22%3E%40NOT_USED%40%3C%2Fsaml%3ANameID%3E%3Csamlp%3ASessionIndex%3EST-47-zdTNWjTqaSAbtxbpBPca-abc.com%3C%2Fsamlp%3ASessionIndex%3E%3C%2Fsamlp%3ALogoutRequest%3E
--a42de647-F--
HTTP/1.1 302 Found
X-Frame-Options: SAMEORIGIN
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store
Pragma: no-cache
Location: https://portal.com/caa/login?service=https%3A%2F%2Faccountingdev..com%2F
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
csrf-token: D=20647 t=1476445738526589
Content-Length: 493
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
You could write a rule like this:
SecRule REQUEST_METHOD "POST" "phase:1,id:1234,pass,log,chain"
SecRule REQUEST_URI "^/$" "ctl:ruleEngine=Off"
This should turn ModSecurity off for any POST requests made to the home page (untested). This is of course quite broad and may remove protection from other POST requests made to the home page that you want to check.
Alternatively you could do this to only search for this specific request:
SecRule REQUEST_METHOD "POST" "phase:2,id:1234,pass,log,chain"
SecRule REQUEST_URI "^/$" "chain"
SecRule ARGS_POST:logoutRequest "LogoutRequest" "ctl:ruleEngine=Off"
However this would need to be a phase 2 rule, to look at the POST arguments - which are in the BODY and so not available in phase 1. This may mean that phase 1 rules fire before it even gets to this rule.
A much better idea is to tune the rules that are firing, but that involves telling us which rules they are, which you seem hesitant to do, so I can't help you much with that until you do.
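To sketch what that tuning could look like: if the firing rule turned out to have, say, ID 981173 (a hypothetical ID, replace it with the one from your error log), you could exclude just this one parameter from it instead of switching the engine off:
# Hypothetical sketch: remove one POST parameter from one rule's targets
SecRuleUpdateTargetById 981173 "!ARGS_POST:logoutRequest"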
I have set up Apache 2 with Django and mod_wsgi on Debian Wheezy. I enabled mod_mem_cache with this configuration:
<IfModule mod_mem_cache.c>
CacheEnable mem /
MCacheSize 400000
MCacheMaxObjectCount 100
MCacheMinObjectSize 1
MCacheMaxObjectSize 500000
CacheIgnoreNoLastMod On
CacheIgnoreHeaders Set-Cookie
</IfModule>
based on the fact that MCacheMaxStreamingBuffer is the smaller of 100000 or MCacheMaxObjectSize as stated in the docs.
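(For what it's worth, my understanding is that the buffer can also be set explicitly; an untested sketch:)
# Untested: buffer streamed responses without a Content-Length, up to
# the object size limit, so they can still be cached
MCacheMaxStreamingBuffer 500000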
When I try hitting a page of size 3.3 KB, I get these response headers in Firebug:
Connection Keep-Alive
Content-Encoding gzip
Content-Type text/html; charset=utf-8
Date Wed, 27 Aug 2014 14:47:39 GMT
Keep-Alive timeout=5, max=100
Server Apache/2.2.22 (Debian)
Transfer-Encoding chunked
Vary Cookie,Accept-Encoding
and the page isn't served from cache. In the page source, however, there is the correct header 'Cache-Control: max-age=300,must-revalidate', but it doesn't show up in Firebug.
In the Apache log I only see, correctly:
[info] mem_cache: Cached url: https://83.212.**.**/?
With another test page that I created outside of Django, which doesn't have chunked encoding in its headers, caching works fine. Why is the page not served from cache? Has anyone seen something similar?
I am starting to learn HTTP properly.
I am working in a LAMP stack.
On the command line I am requesting a local page, served by Apache, to see the headers that are returned.
curl -i local.testsite
The page I am requesting has no content and I am not setting any headers, but there are already a lot of headers sent in the response, such as:
HTTP/1.1 200 OK
Date: Thu, 17 Jan 2013 20:28:52 GMT
Server: Apache/2.2.22 (Ubuntu)
X-Powered-By: PHP/5.3.10-1ubuntu3.4
Vary: Accept-Encoding
Content-Length: 0
Content-Type: text/html
So if I am not setting these, does Apache set them automatically?
Yes, Apache sets those by default. By the way, if you only care about the headers, you should use
curl -I local.testsite
-I returns the headers only (HTTP HEAD request), such that even if you had content on the page you would only get the header.
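One caveat: a few servers answer HEAD differently from GET. If that ever matters, you can (a sketch) make a normal GET but print only the headers:
curl -s -D - -o /dev/null local.testsite
Here -D - dumps the received headers to stdout and -o /dev/null discards the body.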
Some are set by PHP:
The X-Powered-By header is set by the expose_php INI setting.
The Content-Type header is set by the default_mimetype INI setting.
The others are set by Apache:
The Server header is controlled by the ServerTokens directive.
The Vary: Accept-Encoding header is usually sent when mod_deflate is enabled.
Date and Content-Length are not configurable as they are part of the HTTP spec. Date is included as a MUST (except under some conditions) and Content-Length as a SHOULD.
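If you did want to trim these, a minimal sketch for httpd.conf (assuming mod_php, untested) would be:
# Reduce the Server header to just "Apache"
ServerTokens Prod
# Drop the X-Powered-By header added by PHP
php_admin_flag expose_php off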
See also How to remove date header from apache? and How to disable the Content-Length response header with Apache?.
I had my site tested with the Page Speed app from Google and one of the suggestions was to specify the character set in the HTTP Content-Type response header claiming it was better than just in a meta tag.
Here's what I understand I need to write:
Content-Type: text/html; charset=UTF-8
...but where exactly should I put this? I'm on a shared server.
Thank you!
Apache: add to your .htaccess file in the root directory:
AddDefaultCharset UTF-8
It will modify the header from this:
Content-Type text/html
...to this:
Content-Type text/html; charset=UTF-8
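If you only want to tag specific file types rather than every response, mod_mime's AddCharset is an alternative (a sketch):
# Label only .html and .htm responses as UTF-8
AddCharset UTF-8 .html .htm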
nginx:
server {
# other server config...
charset utf-8;
}
Add charset utf-8; to the server block (and reload the nginx config).
When I added this, my response header looked like this:
HTTP/1.1 200 OK
Content-Type: text/html,text/html;charset='UTF-8'
Vary: Accept-Encoding
Server: Microsoft-IIS/7.5
With Apache, you use AddDefaultCharset: http://httpd.apache.org/docs/2.2/mod/core.html#adddefaultcharset
With IIS you edit the MIME type for the filetype in the list of files.
With most server-side technologies like PHP or ASP.NET there's a method or property provided by that technology. For example in ASP.NET you can set it in config, page, or page code-behind.
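For instance, in PHP a minimal sketch is to send the header yourself before any output:
<?php
// Must be called before any output is sent to the browser
header('Content-Type: text/html; charset=UTF-8');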