I have a virtual machine with Ubuntu 20.04 installed. On it I'm running the Apache2 web server as a WAF with the ModSecurity 2.9.3 module using the OWASP rules; it listens on ports 80 and 443.
I then installed XAMPP to run the DVWA application. Since the Apache server bundled with XAMPP can't listen on the same ports as Apache2 (there would be a conflict), it listens on 8012 for HTTP and 4431 for HTTPS.
I went through the process of making the DVWA requests pass through the WAF and reach the application by using VirtualHosts, and everything works at the moment. The project involves testing the WAF against malicious attacks, and it works fine for most of them: ModSecurity detects and blocks SQL injection, XSS, LFI, RFI, and so on. But when I test a DDoS attack, ModSecurity detects it as a SCANNER attack (and then blocks it), and I don't know why.
I tested the DDoS attack with WFuzz and Hydra, trying to guess the admin password on a particular DVWA page.
For WFuzz I used this command:
wfuzz -c -w ~/SecLists/Passwords/probable-v2-top207.txt -b 'security=low; PHPSESSID=cookieSession' 'http://localhost/dvwa/vulnerabilities/brute/?username=admin&password=FUZZ&Login=Login'
WFuzz starts sending the requests, which get blocked immediately (HTTP 403 errors), and in Apache2's error log I get messages referring to this type of attack (plus other messages related to what ModSecurity does):
[Thu Apr 07 09:50:33.960952 2022] [:error] [pid 3803:tid 140532596565760]
[client 127.0.0.1:47826] [client 127.0.0.1] ModSecurity: Warning. Match of "rx
^(?:urlgrabber/[0-9\\\\.]+ yum/[0-9\\\\.]+|mozilla/[0-9\\\\.]+ ecairn-grabber/[0-9\\\\.]+
\\\\(\\\\+http://ecairn.com/grabber\\\\))$" against "REQUEST_HEADERS:User-Agent"
required. [file "/usr/share/modsecurity-crs/rules/REQUEST-913-SCANNER-DETECTION.conf"]
[line "55"] [id "913100"] [msg "Found User-Agent associated with security scanner"]
[data "Matched Data: Wfuzz found within REQUEST_HEADERS:User-Agent: Wfuzz/2.4.5"]
[severity "CRITICAL"] [ver "OWASP_CRS/3.4.0-dev"] [tag "application-multi"] [tag
"language-multi"] [tag "platform-multi"] [tag "attack-reputation-scanner"] [tag
"paranoia-level/1"] [tag "OWASP_CRS"] [tag "capec/1000/118/224/541/310"] [tag
"PCI/6.5.10"] [hostname "localhost"] [uri "/dvwa/vulnerabilities/brute/"] [unique_id
"Yk6XyWgIm0NzeaCPwEBVrAAAAE0"]
At first I didn't know why the detection was related to a SCANNER attack. From
[msg "Found User-Agent associated with security scanner"] [data "Matched Data: Wfuzz found within REQUEST_HEADERS:User-Agent: Wfuzz/2.4.5"]
I figured the problem was related to the User-Agent Wfuzz/2.4.5, so I tried a custom User-Agent:
wfuzz -c -w ~/SecLists/Passwords/probable-v2-top207.txt -b 'security=low; PHPSESSID=cookieSession' -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0' 'http://localhost/dvwa/vulnerabilities/brute/?username=admin&password=FUZZ&Login=Login'
This time, however, the requests were received by DVWA (HTTP 200 responses), but even after around 100 requests ModSecurity was not detecting the DDoS attack (I was not getting 403 errors), even though "I" was making lots of requests to the application. Using Hydra I get basically the same result. Any suggestions or help is much appreciated; this is not a critical problem, because I could drop this part from the project, but I would like to know what isn't working. I've tried looking at the OWASP rules files, but I was getting nowhere with them, and basically everyone on the web says that no one should be messing with or editing those rules.
My goal would be: after ModSecurity detects 20 attempts, it blocks the subsequent requests, sending a 403 error back to the attacker.
OWASP ModSecurity Core Rule Set Developer on Duty here. First thing to note, ModSecurity version 2.9.3 is quite old (2018!). The current v2 release is version 2.9.5, which features important security and bug fixes. You should seriously consider using the latest version for anything beyond a private sandbox.
Second thing to note, OWASP Core Rule Set (CRS) version 3.4 is our development branch. It's under heavy development and is being radically changed right now (this very week, even). You'll probably want to use our latest official release, which is version 3.3.2 (see https://github.com/coreruleset/coreruleset/releases/tag/v3.3.2).
Wfuzz is indeed listed as a scanner (you can find it listed in the file rules/scanners-user-agents.data). CRS rule 913100 inspects the User-Agent headers of requests and compares them against the contents of scanners-user-agents.data. That's why you're seeing log lines containing "Found User-Agent associated with security scanner": the presence of "Wfuzz" causes that rule to match. You can find the rule in the file rules/REQUEST-913-SCANNER-DETECTION.conf, if you're interested.
ModSecurity is not designed to prevent (D)DoS attacks. It can be made to do so, but it isn't good at it. In fact, just a few days ago, we removed the DoS capabilities from the CRS v3.4/dev branch (like I said, the dev branch is being radically changed as we speak!). For an explanation on this topic, see https://github.com/coreruleset/dos-protection-plugin-modsecurity-v2#plugin-expectations-suitability-and-scale.
If you really want to test DoS rules, take a look at the CRS 3.3.2 release, which still included our DoS rules as standard (in the file rules/REQUEST-912-DOS-PROTECTION.conf). Find the section "Anti-Automation / DoS Protection" in the main configuration file, crs-setup.conf, to configure the DoS rules for use. There are also blog posts and tutorials available online that walk through how to write your own DoS protection rules for ModSecurity, similar to what you described (e.g. "detect 20 requests, then block further requests"). It just requires leveraging ModSecurity's persistent collections mechanism, to store state information between requests (e.g. a running count of "how many requests has this IP address sent?").
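To illustrate what that looks like in practice, here is a minimal, untested sketch using persistent collections (the rule IDs, the 60-second window, and the DVWA path are illustrative choices of mine, not CRS content; SecDataDir must point to a directory the web server can write to):

# Storage location for persistent collections
SecDataDir /tmp/modsec-data

# Open a persistent collection keyed by the client IP address
SecAction "id:10000,phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR}"

# Count requests to the brute-force page; the counter expires after 60 seconds
SecRule REQUEST_URI "@beginsWith /dvwa/vulnerabilities/brute/" \
    "id:10001,phase:1,nolog,pass,setvar:ip.login_attempts=+1,expirevar:ip.login_attempts=60"

# Once the count exceeds 20, deny further requests with a 403
SecRule IP:LOGIN_ATTEMPTS "@gt 20" \
    "id:10002,phase:1,deny,status:403,log,msg:'More than 20 login attempts from %{REMOTE_ADDR}'"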
But, to reiterate the point: neither ModSecurity nor the OWASP CRS will stop DoS attacks by default.
Good luck with your project!
I'm trying to restrict a URL called by a DocuSign event when a document is completed. I want to give access to this URL only to DocuSign hosts or IPs, but I'm unable to do so because of my limited skills with Apache.
Following this documentation, https://www.docusign.com/trust/security/esignature, I've tried to add this to my vhost:
<LocationMatch "^/souscription/api/[^/].*/callback/.*$">
    Require host docusign.com docusign.net
</LocationMatch>
But I have this error in the Apache log:
[Wed Jul 29 12:59:09.663648 2020] [authz_host:error] [pid 32671] [client 162.248.186.11:50836] AH01753: access check of 'docusign.com docusign.net' to /souscription/api/1.0/callback/118/completed failed, reason: unable to get the remote host name
What's wrong with my config?
For Apache questions, use superuser.com
When building a listening server for receiving DocuSign webhook messages, filtering by IP is not recommended, since it leads to a brittle installation that can fail at exactly the wrong time. Instead:
Use the combination of the Basic Authentication and HMAC features to assure yourself that the message really came from DocuSign.
Or better, use an intermediate PaaS service to queue the notification messages. An added benefit is that you can receive the notification messages from behind your firewall with no changes to the firewall. See the example repo and associated blog posts.
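As a rough sketch of the HMAC half of that advice, the receiving endpoint recomputes the signature over the raw request body and compares it with what DocuSign sent (this assumes DocuSign Connect's documented base64-encoded HMAC-SHA256 scheme and the X-DocuSign-Signature-1 header; check the current docs before relying on it):

<?php
// Minimal DocuSign Connect HMAC verification sketch
$secret  = getenv('DOCUSIGN_HMAC_KEY');          // key configured in DocuSign Connect
$payload = file_get_contents('php://input');     // raw webhook body, unmodified
$sent    = $_SERVER['HTTP_X_DOCUSIGN_SIGNATURE_1'] ?? '';

// DocuSign sends base64(HMAC-SHA256(payload, key))
$computed = base64_encode(hash_hmac('sha256', $payload, $secret, true));

if (!hash_equals($computed, $sent)) {            // constant-time comparison
    http_response_code(401);
    exit('Invalid signature');
}
// ...safe to process the notification...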
I inherited maintenance of a self-written CGI application without documentation and have never met the original author. The application stopped working in Debian 8, but worked in Debian 7 and CentOS 5. The main changes were the upgrade from Apache 2.2 (used by Debian 7 & CentOS 5) to Apache 2.4 (used by Debian 8), and the upgrade from Perl 5.8 (in CentOS 5) and Perl 5.14 (in Debian 7) to Perl 5.20.
The problematic part boils down to the following script (a 302 redirect):
#!/usr/bin/perl
$|=1; # activate auto-flushing of stdout
use strict;
use warnings;
my $CRLF = "\015\012";
print STDOUT "Status: 302 Moved Temporarily$CRLF" .
"Location: /does_not_matter$CRLF" .
"URI: /does_not_matter$CRLF" .
"Connection: close$CRLF" .
"Content-type: text/html; charset=UTF-8$CRLF$CRLF";
close STDOUT;
while(1) {
sleep 1;
}
The observed behavior with Apache 2.4 is that the redirect never reaches the client as long as the script is still running, but there is no error message in Apache's error.log. I changed the client (Firefox, Chromium, wget), changed the Apache module (mod_cgid & mod_cgi), sent additional headers, removed the close STDOUT, removed the $|=1, replaced the $CRLF with \n, and made the script fork and exit the parent process (so that Apache is no longer the parent process), all to no avail. The only things that worked: use Apache 2.2, turn the script into an NPH CGI script (which has to send complete HTTP headers that Apache will not modify in any way, even if they contain errors), or make the script exit instead of entering an endless loop. I confirmed via tcpdump that the packets with the redirect indeed never leave the server before the script is killed. And from the Date line in the response and the time of its eventual arrival, I gather that Apache receives my output immediately (and immediately adds the Date line to the headers) but does not send the response to the client.
Don't bother answering; I already figured the solution out by myself and will write an answer. I just want to make the solution available to others who might encounter the same problem.
The problem was the missing message body. A 302 redirect may contain a message body (see RFC 2616, section 4.3 (Message Body): "All other responses do include a message-body, although it MAY be of zero length"), but a Content-Length line is optional (section 4.4 of RFC 2616 says the message body length can be determined by closing the connection if the Content-Length line is missing). Since Apache cannot know whether I want to send a message body or not, it has to wait until I close the connection or actually send the message body (Apache 2.2 apparently behaved erroneously here by not waiting for the message body - or maybe close STDOUT does not do in Perl 5.20 what it did in older Perl versions). The correct script should therefore look like this (verified to work both in Apache 2.2 and in Apache 2.4) - the only difference is an additional $CRLF which terminates a zero-length message body:
#!/usr/bin/perl
$|=1; # activate auto-flushing of stdout
use strict;
use warnings;
my $CRLF = "\015\012";
print STDOUT "Status: 302 Moved Temporarily$CRLF" .
"Location: /doesNotMatter$CRLF" .
"URI: /doesNotMatter$CRLF" .
"Connection: close$CRLF" .
"Content-type: text/html; charset=UTF-8$CRLF$CRLF$CRLF";
close STDOUT;
while(1) {
sleep 1;
}
It was https://stackoverflow.com/a/8062277/2845840 that pointed me in the right direction.
I'm trying to get PingAccess set up as a proxy (let's call the PA host pagateway) for a couple of applications that share a Web Session. I want all access to come via the PA pagateway and use HTTPS, but the back-end systems are not HTTPS.
I have two sites defined, app1:8080 and app2:8080. Both are set to "secure" = no and "use target host header" = yes.
I have listeners defined on ports 5000 and 5001 that are both set to "secure" = yes.
The first problem I found is that when I access either app in this way (e.g. going to https://pagateway:5000), after successfully authenticating with PingFederate I end up getting redirected to the actual underlying host name (e.g. http://app1:8080), meaning any subsequent interactions with the app are not via PingAccess. For users outside the network they wouldn't even be able to do that because the app1 host wouldn't even be visible or accessible.
I thought maybe I needed to set "Use target host header" to false, but then Chrome prompts me to download a file that contains NAK, ETX, ETX, NUL, STX, STX codes, and in the PA logs I get an SSL error:
2015-11-20 11:13:33,718 DEBUG [6a5KYac2dnnY0ZpIl-3GNA] com.pingidentity.pa.core.transport.http.HttpServerHandler:180 - IOException reading sourceSocket
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)
...
I'm unsure exactly which part of the process the SSL error is coming from (between the browser and pagateway, or between pagateway and app1). I'm guessing maybe app1 is having trouble with the unexpected host header...
In another variation I turned off SSL on the PA listener (I also had to change the PingAccess call-back URL in the PingFederate client settings to be http). But when I accessed it via http://pagateway:5000 I got a generic PingFederate error message in the browser and a different error in the PA logs:
2015-11-20 11:37:25,764 DEBUG [DBxHnFjViCgLYgYb-IrfqQ] com.pingidentity.pa.core.interceptor.flow.InterceptorFlowController:148 - Invoking request handler: Scheme Validation for Request to [pagateway:5000] [/]
2015-11-20 11:37:25,764 DEBUG [DBxHnFjViCgLYgYb-IrfqQ] com.pingidentity.pa.core.interceptor.flow.InterceptorFlowController:200 - Exception caught. Invoking abort handlers
com.pingidentity.pa.sdk.policy.AccessException: Invalid request protocol.
at com.pingidentity.pa.core.interceptor.SchemeValidationInterceptor.handleRequest(SchemeValidationInterceptor.java:61)
Does anyone have any idea what I'm doing wrong? I'm kind of surprised about the redirection to the actual server name, to be honest, but after that I'm stumped about where to go from here.
Any help would be appreciated.
Have you contacted our support about this? It sounds like something that will need to be dug into a bit deeper, but here are some high-level suggestions I can make:
Take a look at a browser trace to determine when the redirect to the backend site is happening. Usually this is because there's a Location header in a redirect from the backend web server that (by nature) is an absolute URL pointing to itself instead of to the externally facing hostname.
A common solution to this is setting Target Host Header to False - so it will receive the request unmodified from the browser, and the backend server should know to represent itself as that (if it behaves nicely behind a proxy).
If the backend server can't do that (which it sounds like it can't) - you should look at assigning rewriting rules to that application. More details on them are available here: https://support.pingidentity.com/s/document-item?bundleId=pingaccess-52&topicId=reference%2Fui%2Fpa_c_Rewrite_Rules_Overview.html. The "Rewrite Response Header Rule" in particular will rewrite Location headers in HTTP redirects.
FYI - The "Invalid request protocol." error you're seeing at bottom of your description could be due to a "Require HTTPS" flag on your defined Application.
Do you have the same issue if you add a trailing slash at the end (https://pagateway:5000/webapp/)? Your application server will rewrite the URL based on what it thinks is the true host. This is to get around some security related issues around directory listing.
Which application server are you using? All app servers are unique, but I'll provide instructions on how to resolve this with Tomcat.
Add a global rule that forces the application server to use the external facing host name. Here is a sample Groovy script:
// Force the externally facing host name onto the proxied request
def header = exc?.request?.header;
header?.setHost("pf.pingdemo.com:443");
// PingAccess Groovy rules end with a matcher; anything() always matches
anything();
In Tomcat's server.xml, add scheme="https" to the connector:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="443" scheme="https" />
Cheers,
Tam
My client is using Unleashedsoftware.com to connect to a Magento store, but it gives this error:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <SOAP-ENV:Fault>
      <faultcode>WSDL</faultcode>
      <faultstring>SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://www.domain.com/index.php/api/v2_soap/index/wsdl/1/' : Premature end of data in tag definitions line 2</faultstring>
    </SOAP-ENV:Fault>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
When browsing http://www.domain.com/index.php/api/v2_soap/index/, Firebug gives me a "500 Internal Server Error".
When I browse http://www.domain.com/index.php/api/v2_soap/index/wsdl/1/, I get valid XML data.
I checked the server log files and found entries like:
[Thu Aug 30 22:22:25 2012] [warn] [client 92.92.92.92] mod_fcgid: stderr: in /home/doaminuser/public_html/lib/Zend/Soap/Server.php on line 762
I've been searching for a couple of days now, and today I tried duplicating the entire site to another test server, where it seems to work! So it appears to be a server issue.
Does anybody have any idea what the issue could be?
Is there any better way of debugging this issue, any sample code or debugging tips?
Magento version is 1.6.2
Thank you.
There are lots of cases where Magento's SOAP API fails due to problems your Magento server has communicating with itself. That is, PHP's SOAP implementation requires that the SOAP server itself fetch the WSDL file via HTTP, and a local network configuration issue can get in the way of Magento fetching its own WSDL.
You can debug this by SSHing into your Magento server and running the following command:
curl -L 'http://www.example.com/index.php/api/v2_soap/index/wsdl/1/' > /tmp/wsdl.xml
and then examining the wsdl.xml file. Because you're performing this from your web server, you may get different results than when you perform it from your local browser.
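A quick way to confirm that what came back is well-formed XML rather than an HTML error page is xmllint (from libxml2); it exits non-zero and prints the parse error if the document is truncated:

xmllint --noout /tmp/wsdl.xml && echo "WSDL is well-formed"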
I had a similar problem when calling the URL
http://www.store.com/index.php/api/v2_soap/?wsdl
After some time I received a "500 - Internal Server Error" and a "Premature end of script headers" message in the Apache error log.
After a whole day of research I figured out that Apache's Timeout directive (configured in httpd.conf on a Linux environment) was set to 20, which caused the server to send the 500 error after 20 seconds. The problem is that, in my case, Magento needs longer than that to "crawl" through all the wsdl.xml files in order to build the WSDL output (if you are using Magento SOAPv2).
Maybe you should check your Timeout directive. Hope that helps.
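For reference, the directive lives in the main Apache configuration file; raising it to something like the following should give Magento enough time (the exact value is a guess and depends on your store):

# httpd.conf - how long Apache waits for certain I/O events, in seconds
Timeout 120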
"I have memories of this. What worked for me was to put the hostname
in /etc/hosts on the server plus the www alias on 127.0.0.1 However,
in this instance the server was in the building rather than in some
ISP place and the LAN had Windows computers on it. Windows users had
downloaded lots of trojan-virus-porn things that were spending the
whole time spamming the network so the real problem was with the
Windows computers on the network, not with the server or with Magento.
After fdisking the PC's the problem was solved."
Thank you! I've been struggling with this for 2 days on Magento 1.6 and Windows Server 2008. Adding this line to the hosts file (C:\Windows\System32\drivers\etc) solved the issue for me:
127.0.0.1 www.Domain.com
Also remember to fix your Magento SOAP role, because the Roles Resources don't save in 1.6 unless you fix this file:
MagentoRoot\app\code\core\Mage\Adminhtml\Block\Api\Tab\Rolesedit.php
replace this:
if (array_key_exists(strtolower($item->getResource_id()), $resources) && $item->getPermission() == 'allow') {
with this:
if (array_key_exists(strtolower($item->getResource_id()), $resources) && $item->getApiPermission() == 'allow') {
In my case the issue was that the ModSecurity rule "PHP Easter Egg Access" was enabled.
Rule ID: 380800
Once it was disabled, the API access worked.
An indicator was in the Apache log file:
Jun 19 09:15:52 httpd[1024961]: [error] [client xyz.xyz.xyz.xyz] ModSecurity: [file "/usr/local/apache/conf/modsec/99_asl_jitp.conf"] [line "116"] [id "380800"] [rev "1"] [msg "Atomicorp.com WAF Rules - Virtual Just In Time Patch: PHP Easter Egg Access"] [data "phpe9568f35-d428-11d2-a769-00aa001acf42"] [severity "CRITICAL"] Access denied with code 403 (phase 2). Pattern match "php(?:e9568f3[56]-d428-11d2-a769-00aa001acf42|b8b5f2a0-3c92-11d3-a3a9-4c7b08c10000)" at REQUEST_URI. [hostname "www.yoursever.com"]...
Magento version: 1.7.0.2
PHP version: 5.3.26
More information about the PHP Easter Egg Access rule:
http://www.atomicorp.com/forums/viewtopic.php?f=3&t=5057
http://www.0php.com/php_easter_egg.php
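If you prefer not to disable rules through a control panel, ModSecurity's standard SecRuleRemoveById directive can switch off just this one rule (the ID is taken from the log entry above; the directive must come after the rule set is loaded):

# Apache/ModSecurity configuration: disable only the PHP Easter Egg rule
SecRuleRemoveById 380800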
For those wanting a quick test script to replicate the issue (useful when trying to convince your hosting provider that it's a problem on their end), use:
<?php
$server = new SoapServer("http://<url to your magento shop>/index.php/api/v2_soap/index/wsdl/1/");
?>
This is the line in /lib/Zend/Soap/Server.php that triggers the error.
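To run the test in place (assuming you save it as soap_test.php, a file name I'm inventing here), execute it with the CLI PHP on the server itself, so the WSDL fetch happens from the web server's own network context:

php soap_test.php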
In my case, if you browsed to
http://< url to your magento shop >/index.php/api/v2_soap/index/wsdl/1/
the XML was fine, but if you ran the above PHP script on the server, the error was given.
This error most often appeared for me when the www was omitted from the domain in the Magento SOAP URL. The URL has to match the base URL specified in the Magento configuration.
Currently we have an Apache 2.2.3 server with mod_ssl 2.2.3 running Django, with users authenticating via x509 certificates.
So far the system is running perfectly, except for a single user who receives a 400 Bad Request error when trying to upload a file. The contents of the ssl_error_log regarding this operation are:
[<date>] [error] [client <client ip>] request failed: error reading the headers, referer: <referrer url>
The contents of the ssl_access_log are:
<client ip> - - [<date>] "POST <target page> HTTP/1.1" 400 321
Also, the user's browser is Firefox, as far as I know.
I am completely unable to reproduce this bug, and so far none of the other users have experienced it. Could you point out some reasons why this might happen?
I've experienced connectivity that cuts off the upstream after a certain number of bytes is sent. The limit was pretty low: enough to request some simple pages, but not to deal with AJAX requests, much less to upload files. As far as I recall, this connectivity problem occurred only when tethering (from a specific Android phone; I didn't test other phones).
So if the upstream gets interrupted and the upload stalls, it makes sense that Apache would return this error, according to this post: "Apache waits a time equal to the Timeout directive (defaults to 5 minutes if not defined) for a response from the client. It is likely Apache is waiting for the CRLF that indicates the end of the headers, yet it is never received."
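As an aside, if you want stalled uploads like this to fail fast instead of tying up a worker for the full Timeout, Apache's mod_reqtimeout (available from 2.2.15 onwards, so newer than the 2.2.3 in the question) can enforce minimum data rates; the values below are the illustrative ones from the module's documentation:

# Fail a request if headers take longer than 20-40s or the body stalls below 500 bytes/s
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500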