Perl CGI Output Buffering Using XAMPP Apache Server - apache

I recently picked up a CGI Programming with Perl book, and while trying to test one of the example problems I ran into an issue. According to the book, newer versions of Apache (since v1.3) do not buffer the output of CGI scripts by default, but when I run the script below, it waits until the entire loop completes before printing anything:
# count.cgi
#!"C:\xampp\perl\bin\perl.exe" -wT
use strict;
#print "$ENV{SERVER_PROTOCOL} 200 OK\n";
#print "Server: $ENV{SERVER_SOFTWARE}\n";
print "Content-type: text/plain\n\n";
print "OK, starting time consuming process ... \n";
# Tell Perl not to buffer our output
$| = 1;
for ( my $loop = 1; $loop <= 30; $loop++ ) {
    print "Iteration: $loop\n";
    ## Perform some time consuming task here ##
    sleep 1;
}
print "All Done!\n";
The book said that with an older version of Apache you may need to run the script as an "nph" script so the output would not be buffered, but I even tried that with no luck.
# nph-count.cgi
#!"C:\xampp\perl\bin\perl.exe" -wT
use strict;
print "$ENV{SERVER_PROTOCOL} 200 OK\n";
print "Server: $ENV{SERVER_SOFTWARE}\n";
print "Content-type: text/plain\n\n";
print "OK, starting time consuming process ... \n";
# Tell Perl not to buffer our output
$| = 1;
for ( my $loop = 1; $loop <= 30; $loop++ ) {
    print "Iteration: $loop\n";
    ## Perform some time consuming task here ##
    sleep 1;
}
print "All Done!\n";
I am running: Apache/2.4.10 (Win32) OpenSSL/1.0.1i PHP/5.5.15
Clearly this version of Apache is well beyond v1.3, so what is going on here? I did a little research and found that having "mod_deflate" or "mod_gzip" enabled can cause output to be buffered, but I checked my configuration files and both modules are already disabled. All of the other posts I have seen about buffering refer to PHP and say to modify "php.ini", but I am using Perl, not PHP, so that doesn't seem to be the solution.
Also, in case it helps, I am using Chrome as my web browser.
How can I stop Apache from buffering my output? Thanks!

Try disabling 'mod_deflate'.
On a Debian-style layout you can simply move/delete it from your mods-enabled directory; with XAMPP on Windows, comment out the corresponding LoadModule line in httpd.conf instead.
Don't forget to restart Apache after doing so.
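For the XAMPP case, the change in httpd.conf is a single line (the exact module path may differ between installs):

```apache
# httpd.conf: disable on-the-fly compression so CGI output is not
# held back in a compression buffer, then restart Apache
#LoadModule deflate_module modules/mod_deflate.so
```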

Related

Perl Apache script runs from browser - performs as expected and closes a running perl instance, but does nothing when trying to launch a new perl instance

I have a server running Perl and an Apache web server.
I wrote a script which closes running instances of perl.exe and then launches them again, with some system() commands.
When I try to run it from a browser it works as expected and closes all running perl.exe, but then it doesn't restart them with my system("start my_script.pl").
This is the script run from the browser:
#!/Perl/bin/perl
use lib "/Perl/ta/mods" ;
# http://my_domain.com/cgi-bin/myscript.pl?
use CGI::Carp qw(fatalsToBrowser);
use IPC::System::Simple qw(system capture);
use Win32::Process::List;
my $script_to_end = "start \\path_to_script\\myscript.pl";
system($script_to_end);
print "done" ;
exit;
This launches myscript.pl, which does the following:
#!/Perl/bin/perl
use strict;
use warnings;
use lib "/Perl/ta/mods" ;
use Win32::Process::List;
my $script = 'io.socket' ;
my @port = (4005, 5004);
my $scriptpath_4005 = "Perl C:\\path_to_script\\$script.pl $port[0]";
my $scriptpath_5004 = "Perl C:\\path_to_script\\$script.pl $port[1]";
our $nmealogger = "C:\\nmealogger\\nmealogger.exe";
system('TASKKILL /F /IM nmealogger* /T 2>nul');
print "current running perl instance: $$\n" ;
my $P = Win32::Process::List->new(); #constructor
my %list = $P->GetProcesses(); #returns the hashes with PID and process name
foreach my $key ( keys %list ) {
    unless ( $list{$key} eq 'perl.exe' ) { next; }
    # $active_perl_pid{$key} = 1 ;
    print sprintf( "%30s has PID %15s", $list{$key}, $key ) . "\n\n";
    if ( $$ == $key ) {
        print "current running perl instance: $key\n";
        next;
    }
    else {
        print "kill: $key\n";
        system("TASKKILL /F /PID $key");
        # system('TASKKILL /F /IM powershell* /T 2>nul');
    }
}
system "start $nmealogger" ;
system "start $scriptpath_4005";
system "start $scriptpath_5004";
use common_cms;
exit;
This works fine if I run it on the machine itself: it kills all perl.exe and re-launches them. Run from the browser, however, it only kills them and never re-launches them.
I thought it might have to do with the httpd.conf settings, but I'm not sure.
Any help would be greatly appreciated.
Thanks
Update: I couldn't get around this issue, so I took a different approach.
I ended up changing the browser-side script to write a log file on the server, and created a scheduled task that runs every minute to check whether that file exists, which then kicks off my script on the server.
Quite a long way around, but hey, it works.
Thanks for the suggestions, much appreciated.
"start" runs the script in a new command window, correct? Presumably Apache is running as a service, which has no interactive desktop for that window to appear on; please see this related answer: https://stackoverflow.com/a/36843039/2812012
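Under that assumption, one workaround is to launch the script detached rather than via "start", so no console window is needed at all. A sketch using Win32::Process; the script path is hypothetical, and the Create call only runs on Windows:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical path; substitute your real script location
my $script  = 'C:\\path_to_script\\myscript.pl';
my $cmdline = qq{perl "$script"};

if ( $^O eq 'MSWin32' ) {
    require Win32::Process;
    Win32::Process->import;
    my $proc;
    # DETACHED_PROCESS: the child gets no console window, so it can be
    # launched from a service that has no interactive desktop
    Win32::Process::Create( $proc, $^X, $cmdline, 0,
        DETACHED_PROCESS(), '.' )
        or die Win32::FormatMessage( Win32::GetLastError() );
}
print "launch: $cmdline\n";
```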

Apache server configuration with the right perl version

So basically my Apache server is not using the same Perl version as my OS environment, which is why some modules could not be used properly. When I checked my error log, it showed that my intended module cannot be located, and it is pointing at /users/local/perl6.
When I do perl -v in the terminal, it says that this is perl 5, version 16. What's the best way to point my Apache server at the right version of perl?
The right way is to configure the PATH environment variable correctly. To see the current environment variables, use:
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper;

print "Content-Type: text/plain\n\n";
print Dumper \%ENV;
How to set environment variables from the Apache config:
For example, if you want Apache to use a specific version of perl, install it into the /home/www/perl directory (here www is the user running the Apache server) and set PATH:
SetEnv PATH /home/www/perl/bin
Do not clobber the current PATH value if you still need it.
If you want to see which perl is used, run:
#!/usr/bin/env perl
use strict;
use warnings;

print "Content-Type: text/plain\n\n";
print `which perl`;
The particular OS running Apache might have some bearing on the best way. However, assuming no mod_perl, one way that I have used on *nix systems and Windows is to include the entire path to the desired interpreter in the shebang line of the script. That is, the first line of the script should start with "#!" followed by the desired interpreter. For your server, it should be:
#!/path/to/Perl5/perl
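Alternatively, the interpreter can report on itself: Perl's $^X variable holds the path of the binary running the script, which avoids shelling out to which:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

print "Content-Type: text/plain\n\n";
# $^X is the path of the interpreter executing this script;
# $] is its version number
print "Interpreter: $^X\n";
print "Version: $]\n";
```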

Interrupt (SIGINT) perl script from perl script running under apache

I have a perl script running as root that monitors a serial device and sends commands to it. Under apache, I have another perl script that displays a gui for the controlling 'root' script.
I'm trying to send SIGINT and SIGUSR1 from the gui perl script to the root perl script, but I get "Operation not permitted", probably because one runs as root and the other does not.
I basically want the gui to be able to tell the controlling root script to pass some command to the serial device.
If I run the gui script from the command line as root, it can successfully signal the root script.
I'm not sure where to go from here; any suggestions on methods to interrupt a root script when not running as root? I'm calling a separate "signal" script as shown:
my @cmds = ( 'perl', '/var/www/signal.cgi', $perlSerialPID, 'test' );
if ( 0 == system(@cmds) ) {
    # code if the command runs ok
    print "system call ok: $?";
}
else {
    # code if the command fails
    print "not ok $?";
}
# try backticks
$output = `perl /var/www/signal.cgi $perlSerialPID test`;
print "<br>:::$output";
# try backticks
$output = `perl /var/www/signal.cgi $perlSerialPID test`;
print "<br>:::$output";
signal.cgi:
#!/usr/bin/perl
use strict;
use warnings;

my ( $pid, $type ) = @ARGV;
if ( $type eq 'test' ) {
    if ( kill 0, $pid ) {
        print "perl-serial $pid running!";
    }
    else {
        print "perl-serial $pid gone $!";
    }
}
elsif ( $type eq 'USR1' ) {
    if ( kill 'USR1', $pid ) {
        print "perl-serial interrupted";
    }
    else {
        print "perl-serial interrupt failed $!";
    }
}
else {
    print "FAILED";
}
Use named pipes for IPC.
The downside is that you'll have to retrofit your main script to read from the named pipe as an event-driven process.
Similar in concept to the above (and with the same downside): use sockets for communication.
Or create a separate "send a signal" script that's called via a system call from your UI, and make that new script SUID and owned by root, in which case it will execute with root's permissions.
Downside: this is ripe for security abuse, so harden it very carefully.
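A minimal sketch of the named-pipe option, with a hypothetical FIFO path and a fork standing in for the two separate scripts; in the real setup the root script would own the read loop and the web script would open the FIFO for writing:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(mkfifo);

my $fifo = "/tmp/serial-cmd.$$";    # hypothetical FIFO path
mkfifo( $fifo, 0600 ) or die "mkfifo: $!";

my $pid = fork() // die "fork: $!";
if ( $pid == 0 ) {                  # "CGI" side: send a command
    open my $w, '>', $fifo or die "open for write: $!";
    print {$w} "USR1\n";
    close $w;
    exit 0;
}
open my $r, '<', $fifo or die "open for read: $!";    # "root" side
my $cmd = <$r>;                     # blocks until a command arrives
close $r;
waitpid $pid, 0;
unlink $fifo;
print "received: $cmd";             # prints "received: USR1"
```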

Running a process in background in Perl

Update: edited to make the question more understandable.
I am creating a script which automatically parses an HTTP upload and stores information about the uploaded file, such as its name and time of upload, in another data file. This information is found in the mod_security log file. mod_security has a rule that can redirect an uploaded file to a Perl script; in my case that script is upload.pl, which scans the uploaded file with the ClamAV antivirus. However, mod_security only logs the upload information after upload.pl has finished. So from upload.pl I start a second script, execute.pl, which begins with a sleep(10); the intention is that execute.pl starts its work only after upload.pl completes. I need execute.pl to run as a background process so that upload.pl finishes without waiting for its output.
My issue is that even though execute.pl is started in the background, the HTTP upload still waits for it to complete. From the console the script behaves correctly: running perl upload.pl returns without waiting for execute.pl. But through Apache, i.e. when I upload a sample file, the upload blocks until both upload.pl and execute.pl have completed.
The methods I have tried so far are:
system("cmd &");
my $pid = fork();
if ( defined($pid) && $pid == 0 ) {
    # background process
    my $exit_code = system($command);
    exit $exit_code >> 8;
}
and:
my $pid = fork();
if ( defined($pid) && $pid == 0 ) {
    exec($command);
}
Rephrase of the question:
How do I start a Perl daemon process from a Perl web script?
Answer:
The key is to close the streams of the background job, since they are shared with the webserver:
Webscript:
#!/usr/bin/perl
#print html header
print <<HTML;
Content-Type: text/html

<!doctype html public "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head><title>Webscript</title>
<meta http-equiv="refresh" content="2; url=http://<your host>/webscript.pl" />
</head><body><pre>
HTML
#get the ps headers
print `ps -ef | grep STIME | grep -v grep`;
#get the running backgroundjobs
print `ps -ef | grep background.pl | grep -v grep`;
#find any running process...
$output = `cat /tmp/background.txt`;
#start the background job if there is no output yet
`<path of the background job>/background.pl &` unless $output;
#print the output file
print "\n\n---\n\n$output</pre></body></html>";
Background job:
#!/usr/bin/perl
#close the shared streams!
close STDIN;
close STDOUT;
close STDERR;
#do something useful!
for ( $i = 0; $i < 20; $i++ ) {
    open FILE, '>>/tmp/background.txt';
    printf FILE "Still running! (%2d seconds)\n", $i;
    close FILE;
    #getting tired of all the hard work...
    sleep 1;
}
#done
open FILE, '>>/tmp/background.txt';
printf FILE "Done! (%2d seconds)\n", $i;
close FILE;
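A variation on the same idea, assuming a POSIX system: reopen the shared streams on /dev/null instead of closing them outright, and detach with setsid; reopening means a later open() cannot accidentally grab descriptor 0, 1 or 2. This is a sketch, and the daemonize name is mine:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

# Fork, let the parent return to the webserver, and re-point the
# shared streams at /dev/null in the child.
sub daemonize {
    my $pid = fork() // die "fork: $!";
    return $pid if $pid;              # parent: child's pid, keep serving
    setsid() or die "setsid: $!";     # child: new session, no terminal
    open STDIN,  '<', '/dev/null' or die "reopen STDIN: $!";
    open STDOUT, '>', '/dev/null' or die "reopen STDOUT: $!";
    open STDERR, '>', '/dev/null' or die "reopen STDERR: $!";
    return 0;                         # child carries on as the daemon
}
```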

jruby IO.popen read hangs in action when requests are made to the local dev server

I just want to send the output of wkhtmltopdf to the user. It shouldn't be so hard.
def it
  send_pdf "file.pdf"
end

def send_pdf(file)
  url = url_for(params) # Example: http://localhost:3000/report/it
  webkit = Rails.root.join('app', 'bin', 'wkhtmltopdf', 'current')
  cmd = "#{webkit} -q \"#{url}\" -"
  data = IO.popen(cmd).read ############### HANGS HERE ###################
  send_data(data, type: "application/pdf", filename: file)
end
Why does it hang and how to fix it?
I think the clue here may be that it's a local development server - so maybe it can only accept one request at a time.
To test, try getting the html from somewhere else:
def send_pdf(file)
  # [...]
  cmd = "#{webkit} -q http://brighterplanet.com -"
  # [...]
end
If that works, then the answer to your question is that the development server is "single-threaded": it cannot serve wkhtmltopdf's request for the page while it is still blocked inside your action waiting for wkhtmltopdf.
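Separately from the server question, it's worth using the block form of IO.popen, which closes the pipe and reaps the child automatically even if reading raises; run_and_read here is a hypothetical helper:

```ruby
def run_and_read(cmd)
  # Block form: the pipe is closed and the child process reaped
  # automatically, even if an exception is raised mid-read
  IO.popen(cmd) { |io| io.read }
end

run_and_read("echo hello")  # => "hello\n"
```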