Apache sends script from cgi-bin directory as plain text

This is on FreeBSD 11.3, with Apache 2.4 freshly installed via pkg.
As the minimal possible test of CGI, I'm trying to get the test-cgi script distributed as part of the Apache package for FreeBSD to run. (Serving static HTML pages works fine.)
I'm using the default directories, so the cgi-bin directory is /usr/local/www/apache24/cgi-bin. I have put a shebang line for /bin/sh into the test-cgi file, and I have set the permissions on the test-cgi file to 755. I can run the test-cgi file from the command line.
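For reference, the sanity checks above amount to something like this (same default paths):
head -1 /usr/local/www/apache24/cgi-bin/test-cgi    # shows the added #!/bin/sh shebang
chmod 755 /usr/local/www/apache24/cgi-bin/test-cgi  # executable by everyone
/usr/local/www/apache24/cgi-bin/test-cgi            # runs fine from the shell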
I have started Apache running.
I have checked that the config points the script alias directory to the right place:
ScriptAlias /cgi-bin/ "/usr/local/www/apache24/cgi-bin/"
I have checked the Directory block in the config for that directory and even tried adding some things (which didn't help):
<Directory "/usr/local/www/apache24/cgi-bin">
AllowOverride None
Options +ExecCGI
AddHandler cgi-script .cgi
Require all granted
</Directory>
When I put the URL for that script into my browser, I get the source file for the script as text, rather than the output from running the script. That source includes the shebang line I added, which confirms that the URL points to the file on the server that I expect it to.
I've read many articles, all of which suggest things I've already checked and have correct. (I've been running Apache since I stopped running the NCSA web server, back in the day; but for the last decade I mostly haven't been doing clean installs on systems where I have root, so I'm in danger of having out-of-date knowledge, which is sometimes worse than ignorance.)
No errors are logged when this happens. In the access log, I get a 304 response followed by a 200:
192.168.1.14 - - [15/Feb/2022:17:45:42 -0600] "GET /cgi-bin/test-cgi HTTP/1.1" 304 -
192.168.1.14 - - [15/Feb/2022:18:00:24 -0600] "GET /cgi-bin/test-cgi HTTP/1.1" 200 1269
That ought to mean something useful, I would think. The 304 is a "not modified" status, which should be returned only for a conditional request, so it's probably somehow relevant. Updating the file's timestamp doesn't change this, though.
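One way to take browser caching out of the picture is to fetch the URL with curl, which sends no conditional headers by default (server hostname hypothetical here):
curl -i http://myserver.example/cgi-bin/test-cgi    # -i prints the response headers too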
This has got to be something really simple, perhaps even stupid, that I'm overlooking. Would somebody please point out what precise stupid thing it is? Thanks!!

Got it! In the default config the LoadModule commands for cgi_module and cgid_module were commented out.
Uncommenting them (only one is actually loaded, thanks to the surrounding <IfModule> blocks) got CGI working normally.
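For anyone else who lands here: in the stock httpd.conf shipped with the FreeBSD package (/usr/local/etc/apache24/httpd.conf), the commented-out stanza looks roughly like this; module paths may differ on other platforms:
<IfModule !mpm_prefork_module>
    LoadModule cgid_module libexec/apache24/mod_cgid.so
</IfModule>
<IfModule mpm_prefork_module>
    LoadModule cgi_module libexec/apache24/mod_cgi.so
</IfModule>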

Related

CGI script in Apache 2.4 runs but returns empty response?

In moving a website from older webservers running Apache 2.2 to newer webservers running Apache 2.4, I encountered a weird problem with CGI. Basically, no CGI scripts work on the new webservers; they all return 500 errors. However, in the ScriptLog there is no "%error" section and the "%response" is empty. Scripts appear to be running but returning absolutely nothing! Since no output means no headers, the result is a 500 error.
The mod_cgi module is loaded (confirmed by running "apachectl -M"). We are using a prefork MPM so this is the correct module.
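(For reference, a quick way to confirm both at once; output abridged and from memory:)
apachectl -M | grep -Ei 'mpm|cgi'
#  mpm_prefork_module (shared)
#  cgi_module (shared)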
Most of the CGI scripts are Perl, but we also have one compiled C program that shows exactly the same pattern of behavior. Even a basic test script like this does not work:
#!/usr/bin/perl
print STDOUT "Content-type: text/html\n\n";
print STDOUT "Hello, World.";
I temporarily assigned a shell to the "apache" user, switched to that user, and was able to run several of these scripts. Not all produce meaningful output when run that way but they do run. Yes, /usr/bin/perl does exist, is the only copy of Perl on the system, and perl-CGI is installed.
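Roughly what that test looked like, for anyone reproducing it (the nologin shell path is an assumption based on the RHEL default):
usermod -s /bin/bash apache              # temporarily give the apache user a real shell
su - apache -c /mnt/cgi/usera/first.pl   # run a script as the same user httpd uses
usermod -s /sbin/nologin apache          # revert when done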
All of these scripts are on an NFS share which is used by both the old and new webservers. The old webservers can still serve up these scripts as CGI with no problems. So in case it wasn't already clear the issue here is not with the CGI scripts themselves. It is a configuration problem with the new webservers.
The NFS share is mounted at /mnt/cgi/ with subdirectories for each user. There are sections in a file included in our Apache config which look like this:
Alias /cgi-bin/usera /mnt/cgi/usera
<Directory /mnt/cgi/usera>
Options +ExecCGI
AddHandler cgi-script .cgi .pl
Require all granted
</Directory>
A script in this directory would be accessed at http://server.example.com/cgi-bin/usera/first.pl . When I connect to this page, this is appended to the log file specified by ScriptLog (with the correct IP addresses... I xxx-ed those out):
%% [Fri Nov 11 12:00:00 2016] GET /cgi-bin/usera/first.pl HTTP/1.1
%% 500 /mnt/cgi/usera/first.pl
%request
Host: xxx.xxx.xxx.xxx
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
Via: 1.1 xxx.xxx.xxx.xxx:80
X-Forwarded-For: xxx.xxx.xxx.xxx
X-Forwarded-For-Port: 51380
%response
The permissions on these scripts are all 755, so that's not the problem. If I remove the AddHandler line from the Directory definition for the script directory then I can download the script, so Apache is definitely able to access them.
The new servers are RHEL7. SELinux is in Permissive mode, not Enforcing. The booleans httpd_enable_cgi and httpd_use_nfs are both "on" anyway.
Among the things which I have tried which do not help are:
Setting "ScriptAlias /cgi-bin/usera /mnt/cgi/usera" instead of just "Alias".
Setting "ScriptAlias /cgi-bin/ /mnt/cgi". In general we don't want to use ScriptAlias anyway since there are also text data files in some of those directories.
Making the scripts owner:group apache:apache.
Adding "AllowOverride None" to the Directory block.
Adding "use CGI::Carp 'fatalsToBrowser';" to a script. This returns nothing. Again, I think the scripts are running fine but the output is not being received by Apache somehow.
I should also add that in general the new webservers work fine. PHP-based webapps run just fine on them, and of course static content is no problem.
So that's a lot of detail but in the end the issue is this: How can Apache be executing CGI scripts but getting no output at all from them? Any thoughts?
Sure enough, not long after I asked this question, I found the answer. Basically, we had this line in the config file for this site:
RLimitMEM 2000000 3000000
This limits process memory to 2 MB (soft) and 3 MB (hard), which is far too little for CGI scripts. 50 MB/80 MB worked; we set it even higher just in case.
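RLimitMEM takes byte values (soft limit, then hard limit), so the working setting was along these lines:
# 50 MB soft / 80 MB hard -- enough headroom for the CGI scripts
RLimitMEM 52428800 83886080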
Here are a few references to people having similar problems for the benefit of those of you who found this page via Google:
PHP out of memory error even though memory_limit not reached
http://www.wrensoft.com/forum/zoom-search-engine-v3-v4-v5-old-versions/29-php-cgi-script-returns-no-results-on-apache-rlimitmem-rlimitcpu
If your Perl script is
#!/usr/bin/perl
print STDOUT "Content-type: text/html\n\n";
print STDOUT "Hello, World.";
Please try removing "STDOUT" from it.

Unable to set php_value 'soap.wsdl_cache_dir'

I have a VPS (CentOS 6.5) running Apache 2.2.4 and PHP-FPM (FastCGI Process Manager). Looking in the php-fpm error_log, I've noticed an error with every spawned php-fpm child process:
WARNING: [pool www] child 24086 said into stderr: "ERROR: Unable to set php_value 'soap.wsdl_cache_dir'"
I couldn't find any info on this warning by googling. Does anybody know what it means and how to get rid of it?
UPDATE 1:
fastcgi.conf for apache:
User apache
Group apache
LoadModule fastcgi_module modules/mod_fastcgi.so
<IfModule mod_fastcgi.c>
DirectoryIndex index.php index.html index.shtml index.cgi
AddHandler php5-fcgi .php
# For monitoring status with e.g. Munin
<LocationMatch "/(ping|status)">
SetHandler php5-fcgi-virt
Action php5-fcgi-virt /php5-fcgi virtual
</LocationMatch>
Action php5-fcgi /php5-fcgi
Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization
</IfModule>
# global FastCgiConfig can be overridden by FastCgiServer options in vhost config
FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
And here is the php-fpm.conf and pool configuration for php:
pid = /var/run/php-fpm/php-fpm.pid
daemonize = yes
; Start a new pool named 'www'.
[www]
listen = /tmp/php5-fpm.sock
group = apache
pm = dynamic
pm.max_children = 8
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.status_path = /status
ping.path = /ping
catch_workers_output = yes
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session
php_value[soap.wsdl_cache_dir] = /var/lib/php/wsdlcache
Everything else is on defaults.
UPDATE 2:
After manually creating the /var/lib/php/wsdlcache directory as suggested, setting permissions to 770 and the owner to root:apache, I hoped I wouldn't see the error again. Unfortunately, after restarting the php-fpm process the error is there again, which makes this really strange.
P.S. Maybe this question is more appropriate for Server Fault, but generally there are more experts in PHP and Apache configuration on Stack Overflow.
I hate such trivial solutions. I finally found the problem and the solution myself; leaving it here for reference, with some pre-history.
My FastCGI configuration files were taken from the internet when I first configured FastCGI, as I hadn't used it before. The tutorials showing FastCGI configuration contained the line php_value[soap.wsdl_cache_dir] = /var/lib/php/wsdlcache. I became curious what SOAP actually is, since I don't use it on any of the websites I run on this server, and that curiosity brought me the solution. I don't actually need SOAP, and simply removing that line would probably have fixed the problem, but I decided to leave it in place and found out that I simply needed to install php-soap:
yum install php-soap
(That's for RHEL/CentOS.) After restarting php-fpm, I no longer get the error when fpm processes respawn.
You're getting that message if the directory /var/lib/php/wsdlcache specified in your pool configuration doesn't exist and cannot be created by the PHP worker either. Note that the PHP worker is not running as root but as user apache (which is great for security and should be kept that way!), so it most likely doesn't have write permissions in /var/lib. Keep in mind also that workers can be chrooted (your config doesn't look like you're doing that, but one can); in that case, the directory has, of course, to be inside the chroot jail.
Create that directory and modify the access rights so that apache is able to read and write to it, and everything should be fine.
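A minimal sketch of that, using the same ownership and mode the question's UPDATE 2 already tried:
mkdir -p /var/lib/php/wsdlcache
chown root:apache /var/lib/php/wsdlcache   # group apache so the workers can write
chmod 770 /var/lib/php/wsdlcache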
Pretty sure you can't use php_value with (Fast)CGI. You might want to look at .user.ini files if you're using a version of PHP newer than 5.3.0 and need PHP_INI_PERDIR ini settings.
Since PHP 5.3.0, PHP includes support for configuration INI files on a per-directory basis. These files are processed only by the CGI/FastCGI SAPI. This functionality obsoletes the PECL htscanner extension. If you are using Apache, use .htaccess files for the same effect.
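A minimal sketch of such a file, assuming you wanted the same SOAP setting per directory; it lives in (or above) the script's directory and is picked up only by the CGI/FastCGI SAPI:
; .user.ini
soap.wsdl_cache_dir = /var/lib/php/wsdlcache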
UPDATE: I didn't see that it was pool www. As Johannes H. observes, "You can use php_value inside the pool configuration of php-fpm...". My original answer only really applies to per-directory tweaks.

Apache not allowing access to sub directories

I installed Apache2, PHP, and MySQL on my Linux Mint machine in the hope of continuing a website I had built. After copying and pasting all of the code I had, I noticed a problem with one of my include statements:
<?php include("./dir/file1.html");
That wasn't working. Originally I thought the issue was with PHP, but after a lot of trial and error I've concluded it's Apache not allowing access to subdirectories of /var/www/.
Since I'm new to editing Apache configuration files, I'm not really sure what to change to allow access to all subdirectories within /var/www/ on localhost. I've tried adding:
<Directory /var/www/*>
order allow,deny
allow from all
</Directory>
to my httpd.conf file (which was blank; I learned that has something to do with Linux Mint being Debian-based) and confirmed that the default file in /sites-available has similar code. I'll post that if it's requested.
I'm unsure what else I can do to get Apache to allow access to subdirectories of /var/www/ on localhost; none of my previous attempts have worked.
UPDATE:
I believe it's an Apache issue because when I try to go to a subdirectory through the browser (like localhost/dir/), I get a 403 error. I don't have to be going to an actual webpage to hit that problem. Also, include statements that pull in files from the current directory work fine; the problem is only with subdirectories.
The include statement above gives no errors or any other useful messages; whatever it is including is simply not there. I've tried require, but that gives me a 500 server error: "the server may be down for maintenance" (paraphrased).
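One thing I could try to make the failure visible, sketched here on the assumption that display_errors is currently off:
<?php
// surface the real error instead of a blank include or an opaque 500;
// require is fatal where include only warns, so the message can't be missed
ini_set('display_errors', '1');
error_reporting(E_ALL);
require("./dir/file1.html");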

Broken links in local Wordpress site

I have a wordpress site set up on a live server, and I have replicated the site locally by following these steps:
FTPed live files to local
Set up virtual host (dev.domain.com) to point at local version of site
Imported the db locally
changed wp-config.php to the correct local db settings
changed 'home' and 'siteurl' in db.wp_options to point to http://dev.domain.com (from http://www.domain.com)
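(That last step was a query along these lines; the database name here is hypothetical, whatever the local import used:)
mysql -u root -p dev_db -e "UPDATE wp_options
  SET option_value = 'http://dev.domain.com'
  WHERE option_name IN ('home', 'siteurl');"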
Home page loads fine, /wp-admin all loads fine.
The problem is with links to pages: permalinks are set to point to the post name (http://dev.example.com/sample-post/), just as on the live server. However, locally all links to posts are broken, and Apache (2.2.17) responds with the following error: "The requested URL /sample-post/ was not found on this server."
I'm assuming I've missed a configuration step somewhere, though I've followed this process umpteen times in the past with no problems. The issue with this particular site is that the theme has been hacked with lots and lots of absolute paths entered, meaning setting up a dev site has required loads of code changes.
I'm not really sure how to troubleshoot this further, as I don't completely understand how WordPress/Apache handles permalinks.
Copy the .htaccess over if you haven't already. I think that might be the problem.
OK - sorted this. It was to do with mod_rewrite in Apache.
To fix (this is for my install of Ubuntu 11.04):
First, enable mod_rewrite in Apache:
sudo a2enmod rewrite
Then edit the relevant file in /etc/apache2/sites-available (could be 'default', or one specific to the site):
sudo vi /etc/apache2/sites-available/site-file
Change the AllowOverride directive for your site's document root from None to All:
<Directory /var/www/site.com/>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
That seems to have done it.
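For completeness: the rewrites that AllowOverride All re-enables come from the standard block WordPress writes to .htaccess when permalinks are saved. If that file didn't survive the FTP copy, it looks like this:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress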

403 error on .exe files

I have an Apache webserver running in a CentOS environment. There is a folder, and in it a file with a .exe extension; let's name the file x.exe.
When I try to download this file using http://mysite.com/folder/x.exe, I get a 403 error.
But if I add a GIF to that folder, it works: http://mysite.com/folder/pic.gif
I don't have SSH access to this server, but I need some clue as to why this is happening; the file permissions are correct, too.
Any help is appreciated.
Within Apache's httpd.conf, it is possible to specify default handling actions for certain file types or paths. It may be that your server is configured to block executable files altogether. Similar blocking can also occur in an .htaccess file. There are a few ways to do it... here's one:
<Files ~ "\.exe$">
Order allow,deny
Deny from all
</Files>
That little snippet could be in the main .conf file, an included .conf file, OR an .htaccess file (or all three!), and again, that is just one possibility. Your best bet is to check out the server logs. They will indicate why a given request was denied, in a form similar to this:
[Wed Oct 11 14:32:52 2000] [error] [client 127.0.0.1] client denied by server configuration: /www/root
Take a look at this document for information about server logs (including default paths to the logs themselves).
As I mentioned, there are a few other ways to block access to certain file types, certain files, certain folders, etc. Without looking at the error logs, it is very difficult to determine the cause. Further, without full access to the server, it may not be possible to alter this behavior. This blockage could be in place as a matter of policy for your web host.
I'd like to add that I spent about 2 hours trying this over and over again, only to discover that SELinux was denying specific file types for httpd.
Try:
setenforce Permissive
and see if that corrects the error.
(This was on Fedora 16.)
Well, the answer was that I had this in an .htaccess in that folder, which forbids the .exe:
Deny from all
<FilesMatch "\.(html|HTML|htm|HTM|xhtml|XHTML|js|JS|css|CSS|bmp|BMP|png|PNG|gif|GIF|jpg|JPG|jpeg|JPEG|ico|ICO|pcx|PCX|tif|TIF|tiff|TIFF|au|AU|mid|MID|midi|MIDI|mpa|MPA|mp3|MP3|ogg|OGG|m4a|M4A|ra|RA|wma|WMA|wav|WAV|cda|CDA|avi|AVI|mpg|MPG|mpeg|MPEG|asf|ASF|wmv|WMV|m4v|M4V|mov|MOV|mkv|MKV|mp4|MP4|swf|SWF|flv|FLV|ram|RAM|rm|RM|doc|DOC|docx|DOCX|txt|TXT|rtf|RTF|xls|XLS|xlsx|XLSX|pages|PAGES|ppt|PPT|pptx|PPTX|pps|PPS|csv|CSV|cab|CAB|arj|ARJ|tar|TAR|zip|ZIP|zipx|ZIPX|sit|SIT|sitx|SITX|gz|GZ|tgz|TGZ|bz2|BZ2|ace|ACE|arc|ARC|pkg|PKG|dmg|DMG|hqx|HQX|jar|JAR|xml|XML|pdf|PDF)$">
Allow from all
</FilesMatch>
I added exe there and it worked fine.
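Trimmed down to a sketch (the real file keeps the full extension list above), the shape of the fix is:
Deny from all
<FilesMatch "\.(exe|EXE|gif|GIF|png|PNG|jpg|JPG|pdf|PDF)$">
Allow from all
</FilesMatch>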
Also, a note: this was on a SilverStripe CMS powered site, in SilverStripe's assets folder.