I installed mod_security with the OWASP rules on a Debian Jessie server, and I am running into a problem where it does not run the "runav.pl" script when I try to upload a file.
I modified the script to create /tmp/filewrite.txt with the content "Test text" whenever it runs. If I run the script by hand it creates the file, but when I upload a file the test file is never created.
Here is the modified runav.pl script:
#!/usr/bin/perl
#
# runav.pl
# Copyright (c) 2004-2011 Trustwave
#
# This script is an interface between ModSecurity and its
# ability to intercept files being uploaded through the
# web server, and ClamAV
my $filename = '/tmp/filewrite.txt';
open(my $fh, '>', $filename) or die "Cannot open $filename: $!";
print $fh "Test text\n";
close $fh;
$CLAMSCAN = "clamdscan";
if ($#ARGV != 0) {
print "Usage: modsec-clamscan.pl <filename>\n";
exit;
}
my ($FILE) = shift @ARGV;
$cmd = "$CLAMSCAN --stdout --disable-summary $FILE";
$input = `$cmd`;
$input =~ m/^(.+)/;
$error_message = $1;
$output = "0 Unable to parse clamscan output [$1]";
if ($error_message =~ m/: Empty file\.?$/) {
$output = "1 empty file";
}
elsif ($error_message =~ m/: (.+) ERROR$/) {
$output = "0 clamscan: $1";
}
elsif ($error_message =~ m/: (.+) FOUND$/) {
$output = "0 clamscan: $1";
}
elsif ($error_message =~ m/: OK$/) {
$output = "1 clamscan: OK";
}
print "$output\n";
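For context on what ModSecurity expects back from this script: the @inspectFile operator reads the script's single output line and treats a leading "1" as "file is clean" and anything else as a match. A minimal sketch of that protocol, written in Python purely for illustration (the function name is made up):

```python
def parse_inspectfile_output(line):
    """Split an @inspectFile script's output line into (clean?, message).

    A leading "1" means the file passed the scan; "0" (or anything else)
    makes the ModSecurity rule match and block the upload.
    """
    code, _, message = line.strip().partition(" ")
    return code == "1", message

clean, msg = parse_inspectfile_output("1 clamscan: OK")
```

So whether the upload is allowed hinges entirely on the first token the script prints.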
And here are the related lines from modsecurity.conf:
SecRuleEngine DetectionOnly
SecServerSignature FreeOSHTTP
SecRequestBodyAccess On
SecRequestBodyLimit 20971520
SecRequestBodyNoFilesLimit 131072
SecRequestBodyInMemoryLimit 20971520
SecRequestBodyLimitAction Reject
SecPcreMatchLimit 1000
SecPcreMatchLimitRecursion 1000
SecResponseBodyAccess On
SecResponseBodyMimeType text/plain text/html text/xml
SecResponseBodyLimit 524288
SecResponseBodyLimitAction ProcessPartial
SecTmpDir /tmp/
SecDataDir /tmp/
SecUploadDir /opt/modsecuritytmp/
SecUploadFileMode 0640
SecDebugLog /var/log/apache2/debug.log
SecDebugLogLevel 3
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus "^(?:5|4(?!04))"
SecAuditLogParts ABIJDEFHZ
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log
SecArgumentSeparator &
SecCookieFormat 0
SecUnicodeMapFile unicode.mapping 20127
SecStatusEngine On
Activated rules are under /etc/modsecurity/activated_rules; all the other rules work well except "modsecurity_crs_46_av_scanning.conf".
Does anyone have an idea why it does not do anything with the uploaded file?
I am trying to connect a payment gateway to my website, but I am a beginner. They sent me example code and I adjusted it to my site, but when I try to make a payment I get this error:
" wsdl error: Getting https://190.0.195.24:9001/paymentgw/services/paymentgw?wsdl - HTTP ERROR: cURL ERROR: 51: SSL: certificate subject name 'seguro3.cpmp.com.gt' does not match target host name '190.0.195.24'
url: https://190.0.195.24:9001/paymentgw/services/paymentgw?wsdl"
This is my code:
require_once('./libsoap/nusoap.php');
$url = "https://190.0.195.24:9001/paymentgw/services/paymentgw?wsdl";
$client = new nusoap_client($url , 'wsdl' , false, false, false, false, 0, 25);
$client->authtype = 'certificate';
$client->certRequest['sslcertfile'] = '/var/www/vhosts/VisaKeys/iga.pem';
$client->certRequest['sslkeyfile'] = '/var/www/vhosts/VisaKeys/iga.key';
$client->certRequest['CACert'] = '/var/www/vhosts/VisaKeys/VisaNetCA.key';
$client->certRequest['verifypeer']=0;
$client->certRequest['passphrase']='pass';
$err = $client->getError();
if ($err) {
// Display the error
echo '<h2>Constructor error</h2><pre>' . $err . '</pre>';
// At this point, you know the call that follows will fail
}
echo "<center>";
$start = time();
$result = $client->call('authorizationRequest', $params);
$timing = time() - $start; // calculating the transaction time
if ($result == FALSE)
{
//echo "<pre>".print_r($result, true) . "</pre>";
echo "<h5>Finish time: " . time() . " <br>";
echo "<h5>Total time: " . print_r($timing, true) . "<br>";
echo "There was an error in your transaction, please try again";
}
echo "</center>";
if ($client->fault)
{
echo '<h2>Fault</h2><pre>';
print_r($result);
echo '</pre>';
}
else
{
// Check for errors
$err = $client->getError();
if ($err)
{
// Display the error
echo '<h2>Error</h2><pre>' . $err . '</pre>';
}
}
SSL certificates validate domain names, not IP addresses.
You need to either ignore certificate errors or replace 190.0.195.24 with a hostname for which the server has a valid certificate.
seguro3.cpmp.com.gt is a good guess :-)
seguro3.cpmp.com.gt. 5379 IN A 190.0.195.24
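The answer above hinges on hostname verification: a TLS client compares the host name it was asked to connect to against the certificate's subject, so connecting by raw IP fails even when the certificate itself is valid. A short sketch of the default client behaviour, in Python only for illustration (cURL's error 51 corresponds to the same check):

```python
import ssl

# Python's default client-side context enables hostname checking and
# peer verification, mirroring what cURL does before raising error 51.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# "Ignoring certificate errors" means disabling both checks; hostname
# checking must be turned off before verification can be dropped.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

Connecting to the hostname the certificate was issued for is the cleaner fix; disabling verification trades away the protection TLS gives you.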
I have lots of log files in C:\Logs.
I have IIS logs in each individual site folder (W3SVCXX, where XX is site id from IIS) but inside this same C:\Logs\ folder I have other log files from those same websites but logged via Log4Net as part of the application running at the website.
My current NXLog Config:
<Input i_IISSITES>
Module im_file
File "C:\Logs\u_ex*"
SavePos TRUE
Recursive TRUE
InputType LineBased
Exec $Hostname = 'stg-01-iom';
Exec if $raw_event =~ /^#/ drop(); \
else \
{ \
w3c->parse_csv(); \
$SourceName = "IIS"; \
$EventTime = parsedate($date + " " + $time); \
$Message = to_json(); \
}
</Input>
<Output o_PLAINTEXT>
Module om_udp
Host 10.50.108.32
Port 5555
</Output>
<Route r_IIS>
Path i_IISSITES => o_PLAINTEXT
</Route>
This currently works perfectly for the IIS logs, which match the filename u_ex*.
Folder structure:
c:\Logs\
- L009
- Dir1
- Dir2
- FileName.Log
- L008
- Dir3
- RandomFileName.Log
- L008
- W3SVC1
- u_ex1232.log
- W3SVC2
- W3SVC3
etc etc..
Now, I'm able to be specific for the IIS logs because the filename is u_ex*, but my other log files could be called anything.
So for my other input I need to be able to target all .log files except u_ex*.log.
Any ideas?
Thanks,
Michael
If you need to process all logs, then instead of defining two input instances you could just use one and parse differently depending on the file name:
<Exec>
if file_name() =~ /u_ex\d+\.log$/
{
# parse as IIS
}
else
{
# parse as other
}
</Exec>
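The distinguishing regex can be sanity-checked outside NXLog. A quick sketch in Python (used here only for illustration; the match is made case-insensitive because Windows filenames are):

```python
import re

# u_ex<digits>.log identifies an IIS log; anything else is "other".
iis_log = re.compile(r"u_ex\d+\.log$", re.IGNORECASE)

def is_iis_log(path):
    """Return True when the path ends in an IIS-style u_ex*.log name."""
    return bool(iis_log.search(path))
```

With the folder structure from the question, only the W3SVC* files match, so the Log4Net files fall through to the "parse as other" branch.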
I am trying to play a video file with the help of the VLC plugin.
Here is my code:
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;
my $file ='/home/abhishek/Videos/lua.mp4';
my $size = -s "$file";
my $begin=0;
my $end=$size;
(my $name, my $dir, my $ext) = fileparse($file, qr/\.[^.]*/);
open (my $fh, '<', $file)or die "can't open $file: $!";
binmode $fh;
print "Content-Type: application/x-vlc-plugin \n";
print "Cache-Control: public, must-revalidate, max-age=0";
print "Pragma: no-cache" ;
print "Accept-Ranges: bytes";
print "Content-Length: $end - $begin\n\n";
print "Content-Range: bytes $begin'-'$end'/'$size";
print "Content-Disposition: inline; filename=\"$name$ext\"\n";
print "Content-Transfer-Encoding: binary";
print "Connection: close";
my $cur=$begin;
seek($fh,$begin,0);
while(!eof($fh) && $cur < $end)
{
my $buf=1024*16;
read $fh, $buf, $end-$cur;
$cur+=1024*16;
}
close $fh;
And here is what my access log is writing:
127.0.0.1 - - [29/Dec/2015:11:39:31 +0530] "GET /cgi-bin/download.cgi HTTP/1.1" 200 484 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:43.0) Gecko/20100101 Firefox/43.0"
127.0.0.1 - - [29/Dec/2015:11:39:32 +0530] "GET /cgi-bin/download.cgi HTTP/1.1" 200 447 "-" "(null)"
I checked what this means on the Apache site; here is what I got:
If no content was returned to the client, this value will be "-". To log "0" for no content, use %B instead.
No content is being returned to the client; it is returning null.
I am not able to figure out what is going wrong; any help would be appreciated.
Also, please suggest what I should do to play the video. I am not sure this is the correct way to do it.
Thanks in advance
I see numerous major problems with the headers you are printing:
print "Content-Type: application/x-vlc-plugin \n";
This MIME type is primarily used in an <embed> tag to invoke VLC. The correct MIME type for this file type is probably video/mp4.
print "Cache-Control: public, must-revalidate, max-age=0";
print "Pragma: no-cache" ;
These headers, and a number of the other ones following, are missing terminal newlines (\n). This will cause them to run together, causing unexpected results.
print "Accept-Ranges: bytes";
Along with not having a newline, this header is telling the browser that this resource supports range requests. Your script doesn't actually implement this, though, which will cause browsers to get very confused.
print "Content-Length: $end - $begin\n\n";
Content-Length must be a single number giving the total length of the resource (e.g., Content-Length: $size). Also, you've got two newlines here, which will cause all the following headers to be treated as part of the content.
print "Content-Range: bytes $begin'-'$end'/'$size";
This header would normally be used with range requests, but you haven't fully implemented this feature, so this header will just confuse matters.
print "Content-Transfer-Encoding: binary";
This header is meaningless here; it's primarily used in email. Leave it out.
print "Connection: close";
This header will be set as needed by the web server. CGI scripts shouldn't generate it.
You're also missing the double newline that needs to follow the last header.
I am too lazy to get it fully working, but here are a few suggestions that do not fit into comments. They should definitely move you forward.
# format is in the detail
my $content_length = $end - $begin;
print "Content-Type: application/x-vlc-plugin\r\n";
print "Cache-Control: public, must-revalidate, max-age=0\r\n";
print "Pragma: no-cache\r\n" ;
print "Accept-Ranges: bytes\r\n";
print "Content-Length: $content_length", "\r\n";
print "Content-Range: bytes ${begin}-${end}/${size}\r\n";
print "Content-Disposition: inline; filename=\"$name$ext\"\r\n";
print "Content-Transfer-Encoding: binary\r\n";
print "Connection: close\r\n";
print "\r\n";
################################
#
# flush is needed
#
# ##############################
use IO::Handle;
STDOUT->autoflush;
my $cur=$begin;
seek($fh,$begin,0);
my $chunk = 1024*16;
while (!eof($fh) && $cur < $end)
{
# read() fills $buf and returns the number of bytes actually read
my $read = read($fh, my $buf, $chunk);
last unless $read;
$cur += $read;
##############################
# I suspect you will need this
##############################
print $buf;
}
close $fh;
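The header rules from the two answers are easy to get wrong by hand, so here is the shape a valid response header block must take, sketched in Python purely as an illustration (the byte offsets are placeholders, and video/mp4 follows the earlier answer's suggestion): one numeric Content-Length, CRLF line endings, and a blank line before the body.

```python
begin, end, size = 0, 1_048_576, 1_048_576  # placeholder byte offsets

headers = (
    "Content-Type: video/mp4\r\n"            # the real MIME type, not x-vlc-plugin
    f"Content-Length: {end - begin}\r\n"     # a single number, no expressions
    "\r\n"                                   # blank line separates headers from body
)
```

Everything after that final blank line is the raw file content; any header printed without its own \r\n would silently run into the next one.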
I am using the Unixy Varnish plugin for cPanel and one particular website and all its subdomains use Full SSL + HTTP Strict Transport Security.
Nginx listens on a non-standard ssl port, passes the request to Varnish which by default strips all cookies. The request is then finally served up by Apache.
The website is mostly static html, with a WordPress subdomain, IPB installation, Piwik installation additionally.
The main domain is only static pages, so I would like to force Varnish to cache it anyway (there isn't anything that involves logging in) and then strip cookies, excluding those belonging to Google Analytics.
Currently for Google Analytics I am using the script from http://www.ga-script.org, which uses the classical tracking code js. I intend to add the Universal Analytics code in addition, removing my UA-XXXXXXX id (Only from the classical js).
Then I will parse the Analytics cookie (as described here: http://www.dannytalk.com/read-google-analytics-cookie-script/), with the fix for Universal Analytics, in the latest comment on that post - so I can pass the resulting values to Piwik and/or a CRM system.
I'm not 100% clear on what I need to do to configure Varnish correctly for this kind of scenario and would appreciate others' help with this.
Current Varnish config supplied by Unixy:
###################################################
# Copyright (c) UNIXY - http://www.unixy.net #
# The leading truly fully managed server provider #
###################################################
include "/etc/varnish/cpanel.backend.vcl";
include "/etc/varnish/backends.vcl";
# mod_security rules
include "/etc/varnish/security.vcl";
sub vcl_recv {
# Use the default backend for all other requests
set req.backend = default;
# Setup the different backends logic
include "/etc/varnish/acllogic.vcl";
# Allow a grace period for offering "stale" data in case backend lags
set req.grace = 5m;
remove req.http.X-Forwarded-For;
set req.http.X-Forwarded-For = client.ip;
# cPanel URLs
include "/etc/varnish/cpanel.url.vcl";
# Properly handle different encoding types
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|ico)$") {
# No point in compressing these
remove req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm
remove req.http.Accept-Encoding;
}
}
# Set up disabled
include "/etc/varnish/disabled.vcl";
# Exclude upgrade, install, server-status, etc
include "/etc/varnish/known.exclude.vcl";
# Set up exceptions
include "/etc/varnish/url.exclude.vcl";
# Set up exceptions
include "/etc/varnish/debugurl.exclude.vcl";
# Set up exceptions
include "/etc/varnish/vhost.exclude.vcl";
# Set up vhost+url exceptions
include "/etc/varnish/vhosturl.exclude.vcl";
# Set up cPanel reseller exceptions
include "/etc/varnish/reseller.exclude.vcl";
# Restart rule for bfile recv
include "/etc/varnish/bigfile.recv.vcl";
if (req.request == "PURGE") {
if (!client.ip ~ acl127_0_0_1) {error 405 "Not permitted";}
return (lookup);
}
## Default request checks
if (req.request != "GET" &&
req.request != "HEAD" &&
req.request != "PUT" &&
req.request != "POST" &&
req.request != "TRACE" &&
req.request != "OPTIONS" &&
req.request != "DELETE") {
return (pipe);
}
if (req.request != "GET" && req.request != "HEAD") {
return (pass);
}
## Modified from default to allow caching if cookies are set, but not http auth
if (req.http.Authorization) {
return (pass);
}
include "/etc/varnish/versioning.static.vcl";
## Remove has_js and Google Analytics cookies.
set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
if (req.http.Cookie ~ "^\s*$") {
unset req.http.Cookie;
}
include "/etc/varnish/slashdot.recv.vcl";
# Cache things with these extensions
if (req.url ~ "\.(js|css|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|pdf)$" && ! (req.url ~ "\.(php)") ) {
unset req.http.Cookie;
return (lookup);
}
return (lookup);
}
sub vcl_fetch {
set beresp.ttl = 40s;
set beresp.http.Server = " - Web acceleration by http://www.unixy.net/varnish ";
# Turn off Varnish gzip processing
include "/etc/varnish/gzip.off.vcl";
# Grace to allow varnish to serve content if backend is lagged
set beresp.grace = 5m;
# Restart rule bfile for fetch
include "/etc/varnish/bigfile.fetch.vcl";
# These status codes should always pass through and never cache.
if (beresp.status == 503 || beresp.status == 500) {
set beresp.http.X-Cacheable = "NO: beresp.status";
set beresp.http.X-Cacheable-status = beresp.status;
return (hit_for_pass);
}
if (beresp.status == 404) {
set beresp.http.magicmarker = "1";
set beresp.http.X-Cacheable = "YES";
set beresp.ttl = 20s;
return (deliver);
}
/* Remove Expires from backend, it's not long enough */
unset beresp.http.expires;
if (req.url ~ "\.(js|css|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|pdf|ico)$" && ! (req.url ~ "\.(php)") ) {
unset beresp.http.set-cookie;
include "/etc/varnish/static.ttl.vcl";
}
else {
include "/etc/varnish/dynamic.ttl.vcl";
}
include "/etc/varnish/slashdot.fetch.vcl";
/* marker for vcl_deliver to reset Age: */
set beresp.http.magicmarker = "1";
# All tests passed, therefore item is cacheable
set beresp.http.X-Cacheable = "YES";
return (deliver);
}
sub vcl_deliver {
# From http://varnish-cache.org/wiki/VCLExampleLongerCaching
if (resp.http.magicmarker) {
/* Remove the magic marker */
unset resp.http.magicmarker;
/* By definition we have a fresh object */
set resp.http.age = "0";
}
set resp.http.Location = regsub(resp.http.Location, ":[0-9]+", "");
#add cache hit data
if (obj.hits > 0) {
#if hit add hit count
set resp.http.X-Cache = "HIT";
set resp.http.X-Cache-Hits = obj.hits;
}
else {
set resp.http.X-Cache = "MISS";
}
}
sub vcl_error {
if (obj.status == 503 && req.restarts < 5) {
set obj.http.X-Restarts = req.restarts;
return (restart);
}
}
# Added to let users force refresh
sub vcl_hit {
if (obj.ttl < 1s) {
return (pass);
}
if (req.http.Cache-Control ~ "no-cache") {
# Ignore requests via proxy caches, IE users and badly behaved crawlers
# like msnbot that send no-cache with every request.
if (! (req.http.Via || req.http.User-Agent ~ "bot|MSIE|HostTracker")) {
set obj.ttl = 0s;
return (restart);
}
}
return (deliver);
}
sub vcl_hash {
hash_data(req.http.cookie);
}
You can simply remove the GA cookies from the request; they are not used by your backend.
You can, for example, remove all cookies except for admin:
if ( !( req.url ~ "^/admin/") ) {
unset req.http.Cookie;
}
Or discard all cookies that start with an underscore:
// Remove has_js and Google Analytics __* cookies.
set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js)=[^;]*", "");
// Remove a ";" prefix, if present.
set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
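The two substitutions behave like this, shown in Python only for illustration (VCL's regsuball replaces all matches, like Python's re.sub; the second, anchored pattern only ever matches once):

```python
import re

cookie = "has_js=1; __utma=1.2.3; PHPSESSID=abc123"
# Strip has_js and any cookie whose name starts with an underscore
# (__utma, __utmz, _ga, ...).
cookie = re.sub(r"(^|;\s*)(_[_a-z]+|has_js)=[^;]*", "", cookie)
# Remove a leading ";" left over from the first pass.
cookie = re.sub(r"^;\s*", "", cookie)
```

Only the session cookie survives, so requests that differ only in their analytics cookies hash to the same cache object.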
https://www.varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html
I am trying to send an HTML e-mail using this code, but all I am getting is FALSE from the mail() function.
The error_log is empty.
Can someone tell me why mail() is not working?
$message = '<html><body>';
$message .= '<table rules="all" style="border-color: #666;" cellpadding="10">';
$message .= "<tr style='background: #eee;'><td><strong>Name:</strong> </td><td>SDFSDF</td></tr>";
$message .= "<tr><td><strong>Email:</strong> </td><td>VXCVSDF</td></tr>";
$message .= "</table>";
$message .= "</body></html>";
$to = 'my_mail@gmail.com';
$subject = 'Website Change Request';
$headers = "From: USER NAME"."\r\n";
$headers .= "Reply-To: USER EMAIL"."\r\n";
$headers .= "MIME-Version: 1.0"."\r\n";
$headers .= "Content-Type: text/html; charset=ISO-8859-1"."\r\n";
if (mail($to, $subject, $message, $headers)) {
echo 'Your message has been sent.';
} else {
echo 'There was a problem sending the email.';
}
It's hard to debug the PHP mail() function.
After checking your script, I can confirm that your code is working fine; it's something with your server and/or PHP configuration.
Start with this little snippet to see what is happening:
error_reporting(E_ALL);
ini_set('display_errors', -1);
echo '<br>I am : ' . `whoami`.'<br>';
$result = mail('myaddress@mydomain.com','This is the test','This is a test.');
echo '<hr>Result was: ' . ( $result === FALSE ? 'FALSE' : 'TRUE') . ' ('. $result. ')';
echo '<hr>';
phpinfo();
After the output, check your sendmail_path; in most cases sendmail_path uses the sendmail MTA:
/usr/sbin/sendmail -t -i
Edit your php.ini file, set the following, and don't forget to restart the httpd server:
sendmail_path = /usr/sbin/sendmail -t -i
Check the log files at /var/log/maillog; they could really help you solve the problem.
If you still have a problem, take a good look at PHPMailer, SwiftMailer, PEAR's Mail, or Zend Framework's Zend_Mail: excellent, comprehensive, modern PHP mailing libraries. It will be easier to debug your problem with one of them.
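As a point of comparison for the hand-built headers in the question, this is roughly what a mail library assembles for you automatically; sketched here with Python's stdlib email module purely as an illustration (the addresses are placeholders):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "USER NAME <user@example.com>"   # placeholder sender
msg["To"] = "my_mail@gmail.com"
msg["Subject"] = "Website Change Request"
# set_content fills in MIME-Version and Content-Type, the headers the
# question builds by hand with "\r\n" concatenation.
msg.set_content("<html><body>...</body></html>", subtype="html",
                charset="iso-8859-1")
```

Letting a library generate the MIME headers removes a whole class of formatting bugs before you ever get to server-side delivery problems.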
You can use PHPMailer for this:
https://code.google.com/a/apache-extras.org/p/phpmailer/
Hope this helps!