Running a process in the background in Perl under Apache

Update: Edited to make question more understandable.
I am creating a script that automatically parses an HTTP file upload and stores the uploaded file's details (name, time of upload) in a separate data file. This information comes from the mod_security log: mod_security has a rule that hands the uploaded file to a Perl script, in my case upload.pl, which scans the file with the ClamAV antivirus. However, mod_security only logs the upload details after upload.pl has finished. So from upload.pl I launch a second script, execute.pl, which begins with a sleep(10); the intention is that execute.pl starts its real work only after upload.pl has completed. I need execute.pl to run as a background process so that upload.pl can finish without waiting for execute.pl's output.
My issue is that even though I start execute.pl in the background, the HTTP upload still waits for execute.pl to complete. From the console everything works fine: if I run perl upload.pl, it finishes immediately without waiting for execute.pl. But when I go through Apache, i.e. when I upload a sample file, the upload hangs until both upload.pl and execute.pl have completed. Since execute.pl is launched from upload.pl as a background process, the upload should complete without waiting for it.
The methods I have tried so far:
system("cmd &")
my $pid = fork();
if (defined($pid) && $pid==0) {
# background process
my $exit_code = system( $command );
exit $exit_code >> 8;
}
my $pid = fork();
if (defined($pid) && $pid==0) {
exec( $command );
}

Rephrase of the question:
How do I start a Perl daemon process from a Perl web script?
Answer:
The key is to close the standard streams of the background job, since they are shared with Apache; as long as the child keeps the request's output stream open, Apache waits for it:
Webscript:
#!/usr/bin/perl
#print the HTTP header and the start of the page
#(note the blank line that terminates the header block)
print <<HTML;
Content-Type: text/html

<!doctype html public "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head><title>Webscript</title>
<meta http-equiv="refresh" content="2; url=http://<your host>/webscript.pl" />
</head><body><pre>
HTML
#get the ps headers
print `ps -ef | grep STIME | grep -v grep`;
#get the running backgroundjobs
print `ps -ef | grep background.pl | grep -v grep`;
#read the output file of the background job (if any)
$output = `cat /tmp/background.txt`;
#start the background job if there is no output yet
`<path of the background job>/background.pl &` unless $output;
#print the output file
print "\n\n---\n\n$output</pre></body></html>";
Background job:
#!/usr/bin/perl
#close the shared streams!
close STDIN;
close STDOUT;
close STDERR;
#do something useful!
for ( $i = 0; $i < 20; $i++ ) {
    open FILE, '>>/tmp/background.txt';
    printf FILE "Still running! (%2d seconds)\n", $i;
    close FILE;
    #getting tired of all the hard work...
    sleep 1;
}
#done
open FILE, '>>/tmp/background.txt';
printf FILE "Done! (%2d seconds)\n", $i;
close FILE;

Related

How to create htaccess file correctly?

I am trying to run a script via Apache on a shared Linux server, like this:
#!/usr/bin/perl
print "Content-type: text/html\n\n";
foreach $i (keys %ENV) {
    print "$i $ENV{$i}\n";
}
But I want to run it via a symlink created like this:
ln -s printenv.pl linkedprintenv.pl
It runs fine directly, but I get a 500 server error when executing it via the symlink from a web browser. I understand the solution may be to create a .htaccess file containing
Options +FollowSymLinks
but I tried and that didn't work. Is there extra configuration needed for that single line htaccess file to take effect?
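For the Options line in .htaccess to take effect at all, the enclosing <Directory> section in the main server config must permit it with AllowOverride; many shared hosts set AllowOverride None, which makes Apache silently ignore the file. A minimal sketch of what the server config would need (the directory path is a placeholder):

<Directory "/home/user/public_html/cgi-bin">
    # AllowOverride None silently ignores Options lines in .htaccess;
    # at least this is required:
    AllowOverride Options
</Directory>

Some hosts permit only the owner-checked variant, in which case try Options +SymLinksIfOwnerMatch in the .htaccess instead.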

Perl Apache script runs from browser - performs as expected and closes a running perl instance, but does nothing when trying to launch a new perl instance

I have a server running Perl and an Apache web server.
I wrote a script which closes running instances of perl.exe and then launches them again, with some system() commands.
When I try to run it from a browser it works as expected and closes all running perl.exe, but then it doesn't restart them with my system("start my_script.pl").
This is my script running from the browser.
#!/Perl/bin/perl
use lib "/Perl/ta/mods" ;
# http://my_domain.com/cgi-bin/myscript.pl?
use CGI::Carp qw(fatalsToBrowser);
use IPC::System::Simple qw(system capture);
use Win32::Process::List;
my $script_to_end = "start \\path_to_script\\myscript.pl";
system($script_to_end);
print "done" ;
exit;
This launches myscript.pl, which does the following:
#!/Perl/bin/perl
use strict;
use warnings;
use lib "/Perl/ta/mods" ;
use Win32::Process::List;
my $script = 'io.socket' ;
my @port = (4005, 5004);
my $scriptpath_4005 = "Perl C:\\path_to_script\\$script.pl $port[0]";
my $scriptpath_5004 = "Perl C:\\path_to_script\\$script.pl $port[1]";
our $nmealogger = "C:\\nmealogger\\nmealogger.exe";
system('TASKKILL /F /IM nmealogger* /T 2>nul');
print "current running perl instance: $$\n" ;
my $P = Win32::Process::List->new(); #constructor
my %list = $P->GetProcesses(); #returns the hashes with PID and process name
foreach my $key ( keys %list ) {
    unless ( $list{$key} eq 'perl.exe' ) { next; }
    # $active_perl_pid{$key} = 1 ;
    print sprintf( "%30s has PID %15s", $list{$key}, $key ) . "\n\n";
    if ( $$ == $key ) {
        print "current running perl instance: $key\n";
        next;
    } else {
        print "kill: $key\n";
        system("TASKKILL /F /PID $key");
        # system('TASKKILL /F /IM powershell* /T 2>nul');
    }
}
system "start $nmealogger" ;
system "start $scriptpath_4005";
system "start $scriptpath_5004";
use common_cms;
exit;
This works fine if I run it on the machine itself: it kills all perl.exe and re-launches them. But when run from the browser it only kills them and never re-launches them.
I thought it could be to do with the httpd.conf settings but I'm not sure.
Any help would be greatly appreciated.
Thanks
Update: I couldn't get around this issue, so I took a different approach.
I ended up changing the script run from the browser to write a log file on the server, and created a scheduled task that runs every minute to check whether that file exists, which then kicks off my script on the server.
Quite a long way around but hey it works.
Thanks for the suggestions, much appreciated.
"start" runs the script in a new command window, correct? Presumably Apache is a service, please see related https://stackoverflow.com/a/36843039/2812012.

What's in TeamCity custom_script.cmd

I'm trying to dig into the depths of TeamCity to get a better understanding of what it's doing under the hood (and to improve my own build knowledge). I noticed that when I run a build step it executes its own .cmd, which I presume wraps up the MSBuild scripts. The problem is that whenever I look in the directory specified, the file doesn't exist; I'm guessing it is created, executed, and then deleted almost instantly. Any suggestions on how to access the file, or on what's inside?
Starting: D:\TeamCity\buildAgent\temp\agentTmp\custom_script5990675507156014131.cmd
This temporary file is created by TeamCity when you add a Command Line build step with "Custom script" as the runner.
The content of this file would be the Custom script you specified inside the user interface.
The produced output would be:
Step 1/1: Command Line (1s)
Starting: D:\TeamCity\buildAgent\temp\agentTmp\custom_script2362934300799611461.cmd
in directory: D:\TeamCity\buildAgent\work\c72dca7a7355b5de
Hello World
Process exited with code 0
In case anyone is still wondering about this, you can force echo back on. Put this as the first thing in the custom script:
@echo on
This will undo the echo suppression TeamCity defaults to.
I looked around for a while, but there seems to be no configuration variable in TeamCity that keeps the generated files. If the commands take more than a couple of seconds, you can open the temp directory in Explorer and hit F5 (refresh) from the moment a build starts until the .cmd file appears, then quickly right-click it and select 'Edit' to open it in a text editor. If that is too hard, you can try the following: create a PowerShell script with code like this:
# watch the agent temp directory for generated .cmd files
$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = "D:\TeamCity\buildAgent\temp\agentTmp"
$watcher.Filter = "*.cmd"
$watcher.IncludeSubdirectories = $false
$watcher.EnableRaisingEvents = $true
# append the content of each new or changed .cmd file to a log
$action = {
    $path = $Event.SourceEventArgs.FullPath
    Add-Content "D:\log.txt" -Value (Get-Content $path)
}
Register-ObjectEvent $watcher "Created" -Action $action
Register-ObjectEvent $watcher "Changed" -Action $action
while ($true) { sleep 1 }
and run it. When the build starts and creates a .cmd file, the PowerShell script will copy its content to D:\log.txt. This will still not work for very short-lived scripts, though. In that case I'd just make the script last longer by adding something like
ping 127.0.0.1 -n 5 -w 1000 > NUL
which will make it last at least 5 seconds.

Perl CGI Output Buffering Using XAMPP Apache Server

I recently picked up a copy of CGI Programming with Perl, and while testing one of the example programs I ran into an issue. According to the book, newer versions of Apache (since v1.3) do not buffer the output of CGI scripts by default, but when I run the script below, it waits until the entire loop completes before printing anything:
# count.cgi
#!"C:\xampp\perl\bin\perl.exe" -wT
use strict;
#print "$ENV{SERVER_PROTOCOL} 200 OK\n";
#print "Server: $ENV{SERVER_SOFTWARE}\n";
print "Content-type: text/plain\n\n";
print "OK, starting time consuming process ... \n";
# Tell Perl not to buffer our output
$| = 1;
for ( my $loop = 1; $loop <= 30; $loop++ ) {
    print "Iteration: $loop\n";
    ## Perform some time consuming task here ##
    sleep 1;
}
print "All Done!\n";
The book said that with an older version of Apache you may need to run the script as an "nph" (non-parsed headers) script so the output is not buffered, but I tried that too with no luck.
# nph-count.cgi
#!"C:\xampp\perl\bin\perl.exe" -wT
use strict;
print "$ENV{SERVER_PROTOCOL} 200 OK\n";
print "Server: $ENV{SERVER_SOFTWARE}\n";
print "Content-type: text/plain\n\n";
print "OK, starting time consuming process ... \n";
# Tell Perl not to buffer our output
$| = 1;
for ( my $loop = 1; $loop <= 30; $loop++ ) {
    print "Iteration: $loop\n";
    ## Perform some time consuming task here ##
    sleep 1;
}
print "All Done!\n";
I am running: Apache/2.4.10 (Win32) OpenSSL/1.0.1i PHP/5.5.15
Clearly this version of Apache is beyond v1.3 so what is going on here? I did a little research and found that if you have "mod_deflate" or "mod_gzip" enabled it can cause output to be buffered, but I checked my configuration files and "mod_deflate" and "mod_gzip" are already both disabled. All of the other posts I have seen about buffering refer to PHP and say to modify "php.ini", but I am using Perl, not PHP, so that doesn't seem to be the solution.
Also I don't know if this helps at all but I am using Chrome as my web browser.
How can I stop Apache from buffering my output? Thanks!
Try disabling 'mod_deflate'.
On a Debian-style layout you can move or delete it from your mods-enabled directory; on XAMPP (Windows) comment out its LoadModule line in httpd.conf.
Don't forget to restart Apache after doing so.
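On the XAMPP install from the question that means editing httpd.conf rather than a mods-enabled directory; something like the following (the module path is the stock XAMPP layout, so verify it locally):

# httpd.conf - comment out the deflate module so Apache stops
# collecting the whole CGI response for compression
#LoadModule deflate_module modules/mod_deflate.so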

Interrupt (SIGINT) perl script from perl script running under apache

I have a Perl script running as root that monitors a serial device and sends commands to it. Under Apache, I have another Perl script that displays a GUI for the controlling 'root' script.
I'm trying to send SIGINT and SIGUSR1 from the GUI Perl script to the root Perl script, but I get "Operation not permitted", probably because one runs as root and the other does not.
I basically want the GUI to be able to tell the controlling root script to pass some command to the serial device.
If I run the GUI script from the command line as root, it can successfully signal the root script.
I'm not sure where to go from here; any suggestions on how to signal a root script when not running as root? I am calling a separate "signal" script as shown:
#cmds = ("perl /var/www/signal.cgi", "$perlSerialPID", "test");
if (0 == system(#cmds)) {
# code if the command runs ok
print "system call ok: $?";
} else {
# code if the command fails
print "not ok $?";
}
# try backticks
$output = `perl /var/www/signal.cgi $perlSerialPID test`;
print "<br>:::$output";
signal.cgi:
#!/usr/bin/perl
$pid  = $ARGV[0];
$type = $ARGV[1];
if ( $type eq "test" ) {
    if ( kill 0, $pid ) {    # signal 0 only checks that the process exists
        print "perl-serial $pid running!";
    } else {
        print "perl-serial $pid gone $!";
    }
} elsif ( $type eq "USR1" ) {
    if ( kill 'USR1', $pid ) {
        print "perl-serial interrupted";
    } else {
        print "perl-serial interrupt failed $!";
    }
} else {
    print "FAILED";
}
Use named pipes for IPC (see the sketch after this list).
The downside is that you'll have to retrofit your main script to read from the named pipe as an event-driven process.
Similar in concept to the above (with the same downside), but use sockets for communication.
Create a separate "send a signal" script that's called via a system call from your UI, and make that new script SUID and owned by root, in which case it will execute with root's permissions.
Downside: this is ripe for security abuse, so harden it very carefully.
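
A minimal sketch of the named-pipe option mentioned above (the FIFO path and the one-word command protocol are invented for illustration): the root monitor script owns a FIFO and blocks reading it, and the GUI script simply writes a line into it, so no signals or shared privileges are needed.

#!/usr/bin/perl
# root monitor script - hypothetical FIFO-based command loop
use strict;
use warnings;
use POSIX qw(mkfifo);

my $fifo = '/var/run/perl-serial.cmd';    # hypothetical path

mkfifo( $fifo, 0622 ) or die "mkfifo: $!" unless -p $fifo;
while (1) {
    open my $cmds, '<', $fifo or die "open: $!";    # blocks until the GUI writes
    while ( my $line = <$cmds> ) {
        chomp $line;
        # dispatch $line (e.g. "USR1") to the serial-device handler here
    }
    close $cmds;    # writer closed its end; reopen and wait for the next command
}

# The GUI script's side is just:
#   open my $out, '>', $fifo or die "open: $!";
#   print {$out} "USR1\n";
#   close $out;

One caveat on the SUID option: most Unix kernels ignore the setuid bit on interpreted scripts, so a plain SUID Perl script will not actually run as root; the usual workarounds are a small compiled SUID wrapper or a tightly scoped sudoers rule that lets the web user run only signal.cgi as root.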