"End of script output before headers" error in Apache - apache

Apache on Windows gives me the following error when I try to access my Perl script:
Server error!
The server encountered an internal error and was unable to complete your request.
Error message:
End of script output before headers: sample.pl
If you think this is a server error, please contact the webmaster.
Error 500
localhost
Apache/2.4.4 (Win32) OpenSSL/1.0.1e PHP/5.5.3
This is my sample script:
#!"C:\xampp\perl\bin\perl.exe"
print "Hello World";
but it is not working in the browser.

Check file permissions.
I had exactly the same error on a Linux machine with the wrong permissions set.
chmod 755 myfile.pl
solved the problem.

If this is a CGI script for the web, then you must output your header:
#!"C:\xampp\perl\bin\perl.exe"
print "Content-Type: text/html\n\n";
print "Hello World";
That is exactly what this error message is telling you: End of script output before headers: sample.pl
Or even better, use the CGI module to output the header:
#!"C:\xampp\perl\bin\perl.exe"
use strict;
use warnings;
use CGI;
print CGI::header();
print "Hello World";

For future reference:
This is typically an error that occurs when you are unable to view or execute the file, and the reason is generally a permissions error. I would start by following @Renning's suggestion and running chmod 755 test.cgi (obviously replace test.cgi with the name of your CGI script here).
If that doesn't work, there are a couple of other things you can try. I once got this error when I created test.cgi as root in another user's home. The fix there was to run chown user:user test.cgi, where user is the name of the user whose home you're in.
The last thing I can think of is making sure that your CGI script is returning the proper headers. In my Ruby script I did it by putting puts "Content-type: text/html" before I actually output anything to the page.
Happy coding!

This is probably an SELinux block. Try this:
# setsebool -P httpd_enable_cgi 1
# chcon -R -t httpd_sys_script_exec_t cgi-bin/your_script.cgi

I had the same error on a Raspberry Pi. I fixed it by adding -w to the shebang:
#!/usr/bin/perl -w

You may be getting this error if you are executing CGI files out of a home directory using Apache's mod_userdir and the user's public_html directory is not group-owned by that user's primary GID.
I have been unable to find any documentation on this, but it was the solution I stumbled upon for some failing CGI scripts. I know it sounds really bizarre (it doesn't make any sense to me either), but it worked for me, so hopefully it will be useful to someone else as well.
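If that is your situation, a fix along these lines may help (the user name alice is hypothetical; substitute your own):
# make public_html group-owned by the user's primary group
chgrp alice /home/alice/public_html
ls -ld /home/alice/public_html   # verify the group ownership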

Since no answer has been accepted, I would like to offer one more possible cause. If your script was written on Windows and uploaded to a Linux server (through FTP), this problem often arises. The reason is that Windows ends each line with CRLF while Linux uses LF, so the shebang line is read with a trailing carriage return and the interpreter cannot be found. Convert the file from CRLF to LF line endings, either in an editor such as Atom or on the command line.
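From the command line, either of these will do the conversion; the first assumes the dos2unix utility is installed:
dos2unix sample.pl
perl -pi -e 's/\r\n/\n/' sample.pl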

If using suEXEC, ensure that the script and its directory are owned by the same user you specified in the suEXEC configuration.
In addition, ensure that the user running the CGI script has execute permissions on the file AND on the program specified in the shebang.
For example if my cgi script starts with
#! /usr/bin/cgirunner
Then the user needs permissions to execute /usr/bin/cgirunner.
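A quick way to check both conditions (the user name myuser and the script path are illustrative):
ls -l /usr/bin/cgirunner                  # must be executable by the CGI user
chown myuser:myuser /home/myuser/cgi-bin/script.cgi
chmod 755 /home/myuser/cgi-bin/script.cgi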

The internal error can be due to a HIDDEN character at the end of the shebang line, i.e. #!/usr/bin/perl.
Adding - or -w at the end moves the character away from "perl", allowing the path to the Perl interpreter to be found and the script to execute.
The hidden character is typically introduced by the editor used to create the script.
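One way to confirm and remove such a character (usually a carriage return from CRLF line endings):
cat -v script.pl | head -n 1   # a trailing ^M reveals the hidden CR
sed -i 's/\r$//' script.pl     # strip it (GNU sed)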

So, for everyone starting out with XAMPP and CGI:
change the extension from .pl to .cgi
change the permissions to 755
mv test.pl test.cgi
chmod 755 test.cgi
It fixed mine as well.

In my case I had a similar problem, but with C++ on Windows 10. The problem was solved by adding the folder containing the C++ runtime libraries to the Windows PATH environment variable; in my case that was the Code::Blocks MinGW directory:
C:\codeblocks\MinGW\bin

This was my case.
The script had only two lines:
#!/usr/bin/sh
echo "Content-type: text/plain"
and it gave error 500.
Adding this line after the first echo:
echo ""
removes the error: the empty echo prints the blank line that terminates the CGI header block.
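For reference, the complete working script then looks like this (the last line is just an example body):
#!/usr/bin/sh
echo "Content-type: text/plain"
echo ""
echo "Hello"   # body output goes after the blank line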

Building on the suggestions above: I was using XAMPP to run CGI scripts. On Windows 8 it worked without any changes, but on CentOS 7.0 it was throwing errors like the ones above:
AH01215: (2)No such file or directory: exec of '/opt/lampp/cgi-bin/pbsa_config.cgi' failed: /opt/lampp/cgi-bin/pbsa_config.cgi, referer: http://<>/MCB_HTML/TestBed.html
[Wed Aug 30 09:11:03.796584 2017] [cgi:error] [pid 32051] [client XX:60624] End of script output before headers: pbsa_config.cgi, referer: http://xx/MCB_HTML/TestBed.html
What I tried:
Disabled SELinux
Gave full permissions to the script (though 755 is enough)
Finally I added -w, as below:
#!/usr/bin/perl -w
use CGI ':standard';
print header(),
      ...
      end_html();
-w enables all warnings. It started working, though I had no idea why; as the hidden-character answer above suggests, appending -w can move a stray carriage return away from "perl" in the shebang line, which may be the real reason it helped.

Related

Run exec()/system() etc command using PHP & OpenBSD

I am trying to run a simple command, say ls -l, in an OpenBSD shell (uname -r: 6.4) using PHP 5.6.
<?php
$output = shell_exec('ls -l');
echo "<pre>$output</pre>";
?>
There is no output from the above code, just an empty pre tag when inspecting elements.
So what is causing this issue? I tried the same command using
system
shell_exec
exec
No luck. What could the cause be? Perhaps system/shell_exec is not supported in OpenBSD's build of PHP, or it is something else.
Thanks in advance!
You haven't given enough information for a definitive answer, but my guess is that you run PHP through php-fpm, which on OpenBSD is chrooted to /var/www by default. Since shell_exec and system first invoke /bin/sh, and you most likely didn't copy it to /var/www/bin/sh, PHP can't find your shell inside the chroot. After that you'd also need to copy the binaries you want to run (in this case ls) into the chroot, along with any library dependencies (not needed for files under /bin, which are statically linked).
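A minimal sketch of that setup (paths are the OpenBSD defaults for php-fpm's chroot):
mkdir -p /var/www/bin
cp /bin/sh /bin/ls /var/www/bin/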
Hope this helps for illustrative purposes, but please don't use it in
production.

scp fails with "protocol error: filename does not match request"

I have a script that uses SCP to pull a file from a remote Linux host on AWS. After running the same code nightly for about 6 months without issue, it started failing today with protocol error: filename does not match request. I reproduced the issue on some simpler filenames below:
$ scp -i $IDENT $HOST_AND_DIR/"foobar" .
# the file is copied successfully
$ scp -i $IDENT $HOST_AND_DIR/"'foobar'" .
protocol error: filename does not match request
# used to work, i swear...
$ scp -i $IDENT $HOST_AND_DIR/"'foobarbaz'" .
scp: /home/user_redacted/foobarbaz: No such file or directory
# less surprising...
The reason for my single quotes was that I was grabbing a file with spaces in the name originally. To deal with the spaces, I had done $HOST_AND_DIR/"'foo bar'" for many months, but starting today, it would only accept $HOST_AND_DIR/"foo\ bar". So, my issue is fixed, but I'm still curious about what's going on.
I Googled the error message, but I don't see any real mentions of it, which surprises me.
Both hosts involved have OpenSSL 1.0.2g in the output of ssh -v localhost, and bash --version says GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
Any ideas?
I ended up having a look through the source code and found the commit where this error is thrown:
GitHub Commit
Check in scp client that filenames sent during remote->local directory copies satisfy the wildcard specified by the
user.
This checking provides some protection against a malicious server
sending unexpected filenames, but it comes at a risk of rejecting
wanted files due to differences between client and server wildcard
expansion rules.
For this reason, this also adds a new -T flag to disable the check.
They have added a new flag, -T, that disables this new check, so it is backwards compatible. However, it is worth finding out why the filenames we're using are being flagged as suspect.
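For example, to accept the server's filenames unchecked (reusing the variables from the question):
scp -T -i $IDENT $HOST_AND_DIR/"'foo bar'" .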
In my case, I had [] characters in the filename that needed to be escaped using one of the options listed here. For example:
scp USERNAME@IP_ADDR:"/tmp/foo\[bar\].txt" /tmp

cgi-bin returning internal server error due to compilation failure

I moved my script to a new server with an almost identical configuration (Apache/CentOS), but the cgi-bin has been failing to work ever since. For the past week I have googled every possible solution and isolated the error by executing the script from the command line. The output I get is as follows for a simple test file:
[root /var/foo/public_html/cgi-bin]# perl -dd /var/foo/public_html/cgi-bin/test.cgi
Loading DB routines from perl5db.pl version 1.32
Editor support available.
Enter h or `h h' for help, or `man perldebug' for more help.
main::(/var/foo/public_html/cgi-bin/test.cgi:2):
2: print "Content-type: text/plain\n\n";
Unknown error
Compilation failed in require at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63.
at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63
Term::ReadLine::Perl::new('Term::ReadLine', 'perldb', 'GLOB(0x18ac160)', 'GLOB(0x182ce68)') called at /usr/share/perl5/perl5db.pl line 6073
DB::setterm called at /usr/share/perl5/perl5db.pl line 2237
DB::DB called at /var/foo/public_html/cgi-bin/test.cgi line 2
Attempt to reload Term/ReadLine/readline.pm aborted.
Compilation failed in require at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63.
END failed--call queue aborted at /var/foo/public_html/cgi-bin/test.cgi line 63.
at /var/foo/public_html/cgi-bin/test.cgi line 63
[root /var/foo/public_html/cgi-bin]#
The code of the test file I am using is:
#!/usr/local/bin/perl
print "Content-type: text/plain\n\n";
print "testing...\n";
I have checked the path to perl, the Perl version, etc., and everything seems to be OK. However, the script is not executing and gives a 500 internal server error. I am running PHP 5 with the DSO handler and suEXEC on. The suEXEC log does not say anything except that the CGI script has been called. This problem is completely baffling me, and my limited experience with CGI/Perl is not helping. Can anyone point me in the right direction to solve this?
As someone commented already, check the permissions and also try to run the file from the console.
A likely problem is that when you switched servers the path to perl changed, and your shebang line is now wrong. A common technique to avoid this is to use #!/usr/bin/env perl instead.
When you receive a 500 error, Apache should also log something in the error log (your vhost config might define a custom error log instead of the default one, so check that if you're having trouble finding the message).
Also, there is no reason to run your script under the Perl debugger unless your goal is to experiment with the Perl debugger (and with no variables defined, it is pointless as an example). My advice is: don't use the Perl debugger. A lot of experienced Perl programmers (probably a big majority) never or rarely use it.
I solved this eventually; posting it for the sake of posterity:
On moving servers I mounted the filesystem on a different partition because the home partition had run out of space. I forgot to give exec permissions to the new mount point, which is why the CGI scripts were not executing.
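If you hit the same thing, this is one way to check for and fix it (assuming the scripts live under /var/foo):
mount | grep /var/foo              # look for a noexec flag
mount -o remount,exec /var/foo     # re-enable execution (persist it in /etc/fstab)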

Cron Job - Could not open input file:

I have set up a PHP file to run that just echoes hello.
<?php
echo "hello";
?>
My cron job looks like this:
/usr/local/bin/php -f "/home/username/public_html/mls/test.php"
When my script runs I get a confirmation email that says:
Could not open input file: /home/username/public_html/mls/test.php
I don't know what is causing this. I am using GoDaddy's virtual private server with cPanel X installed. I have used SSH to set permissions 777 on the folder and file, and still cannot get it to run.
Any advice would be helpful. Thanks.
For some reason PHP cannot open the file. Try replacing /usr/local/bin/php -f with ls -la to glean some more information. Remember NOT to quote the file name in the crontab: php -f filename.php, not php -f "filename.php", unless it contains spaces (and then it's better to use single quotes).
Possibly also try ls -la /home, ls -la /home/username, ls -la ~/public_html and so on.
Also try appending
2>&1
to the command line, in case only stdout is mailed to you (I don't really think so, but being sure costs little).
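With that, the crontab entry would look like this (path as in the question):
/usr/local/bin/php -f /home/username/public_html/mls/test.php 2>&1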
One other possibility:
The crontab as it is refers to /home/username/public_html/mls/test.php, that is, a public HTML directory inside username's most common location for a home directory.
It is possible that the cron job is either not running with the appropriate user and privileges, or that the user it "sees" is actually a virtual user: there is no /home/username at all, and the real "home directory" is elsewhere, possibly existing only for as long as the cron job runs. In that case the solution might be to refer to
~/public_html/mls/test.php
or, as described above, to first run a command such as pwd or ls -la to determine exactly where the cron job's current working directory is.
If this, too, fails, then another workaround could be to invoke the PHP HTTP handler via curl or lynx:
/usr/bin/curl http://www.thishostname.com/mls/test.php
possibly using an environment variable, a curl header, or a _GET parameter to authenticate the request as coming from the cron job, so that the script is not accessible from the outside.

Postfix piping email to php, permissions error

I'm attempting to pipe an email to PHP with my Postfix mail server, using the technique mentioned here and have encountered the following error...
Mar 16 22:52:52 s15438530 postfix/pipe[9259]: AD1632E84C63: to=<php@[myserver].com>, relay=plesk_virtual, delay=0.61, delays=0.59/0/0/0.02, dsn=4.3.0, status=deferred (temporary failure. Command output: /bin/sh: /var/www/vhosts/[myserver].com/httpdocs/clients/emailpipe/email2php.php: Permission denied 4.2.1 Message can not be delivered at this time )
I'd really appreciate it if anyone could shed some light on this issue for me. I've tried 777'ing the emailpipe directory, to no avail. Where am I going wrong?
Many thanks.
From the postfix docs...
For security reasons, deliveries to command and file destinations are performed with the rights of the alias database owner. A default userid, default_privs, is used for deliveries to commands/files in root-owned aliases.
So you have two options: either set default_privs in main.cf to match the owner of the email2php file,
or create an alias database that is owned by the user instead of postfix/nobody. I haven't tried the latter, though, so I can't advise on it.
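A sketch of the first option; the user name vhostuser is hypothetical and should be whoever owns email2php.php:
# in /etc/postfix/main.cf
default_privs = vhostuser
# then reload the configuration
postfix reload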
I fixed this issue by disabling SELinux.
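To test whether SELinux is the culprit before turning it off permanently, you can switch it to permissive mode (this lasts only until reboot):
setenforce 0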
Make sure that you have
#!/usr/bin/php
<?php
(or whatever your path to php is - do "which php" on the server)
at the top of each of your php scripts and that each of the php script files is executable
chmod +x /var/.../email2php.php
Also, make sure that you can test the script from the command line:
cat some_rfc822_email.txt | /var/.../email2php.php
and get the result that you want
To fix this issue, you'll want to chown or chmod /var/www/vhosts/[myserver].com/httpdocs/clients/emailpipe/email2php.php so that it is executable by your postfix user. Alternatively, redefine that user so it can execute the file successfully.
Simply changing the permissions of your directory (unless you used -R) won't be sufficient.
To illustrate why this works, consider the following toy example:
<me>@harley:~$ touch test
<me>@harley:~$ ls -al test
-rw-r--r-- 1 <me> <me> 0 2012-03-26 23:44 test
<me>@harley:~$ sh test
<me>@harley:~$
<me>@harley:~$ ./test
bash: ./test: Permission denied
<me>@harley:~$ chmod 755 test
<me>@harley:~$ ./test
<me>@harley:~$
In order to execute a file directly through the running shell, it needs to be set as executable. Other invocations (for example, sh email2php.php or php email2php.php) only require read access, because they're chaining execution off a different file entirely.
For what's likely to be causing the issue in the first place, see here.