Apache directory listing as JSON

Is it possible to have the directory listing in Apache return JSON instead of HTML?
I'm completely inexperienced with Apache, but I've browsed the documentation for IndexOptions and mod_autoindex. It seems there's no built-in way to configure the output format.

I looked at the Apache source in modules/generators/mod_autoindex.c, and the HTML generation is hard-coded. You could rewrite it to output JSON: search for all the ap_rputs and ap_rvputs calls and replace the HTML with the appropriate JSON. That seems like a lot of work, though.
I think I would do this instead...
In the Apache configuration for this site, change the DirectoryIndex to:
DirectoryIndex ls_json.php index.php index.html
Then place the ls_json.php script into any directory for which you want a JSON-encoded listing:
<?php
// grab the files in this script's directory
$files = scandir(dirname(__FILE__));
// remove "." and ".." (and anything else you might not want, e.g. this script itself)
$output = [];
foreach ($files as $file) {
    if (!in_array($file, [".", "..", basename(__FILE__)])) {
        $output[] = $file;
    }
}
// out we go
header("Content-Type: application/json");
echo json_encode($output);

Alternatively, you could use mod_dir (the DirectoryIndex directive) as above: create a PHP script that lists the directory however you want, and set the Content-Type header appropriately.
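If copying the script into every directory is undesirable, a single mod_rewrite rule can route every directory request to one central listing script instead. This is an untested sketch; the script name and the dir query parameter are assumptions:

```apache
# Send any request that resolves to a directory to one central JSON lister
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^(.*)$ /ls_json.php?dir=$1 [L,QSA]
```

The script would then scandir() the requested path, after validating it against the document root so clients cannot list arbitrary directories.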

Related

Processing paths that start with a dot using Mason dhandler

How can I make a Mason dhandler process a URL whose path section starts with .?
For example, if I have a dhandler file in my web root, the dhandler is triggered if I navigate to
http://www.example.com/hello
but I get a 404 if I navigate to http://www.example.com/.hello.
I am using Mason in combination with Apache and I have verified that this is not an Apache configuration issue forbidding paths that start with dot.
You probably mean HTML::Mason, not the newer Mason.
While I haven't installed Apache, it is simple to create a PSGI test case using HTML::Mason::PSGIHandler, with an app.psgi like this:
use 5.014;
use warnings;
use HTML::Mason::PSGIHandler;

my $h = HTML::Mason::PSGIHandler->new(
    comp_root => $ENV{HOME}.'/tmp/mas/comps',
);
my $app = sub {
    my $env = shift;
    $h->handle_psgi($env);
};
and a very simple dhandler:
<pre>
=<% $m->dhandler_arg %>=
</pre>
After running plackup and pointing my browser to http://localhost:5000/.hello, it shows =.hello=, so HTML::Mason itself has no limitation on processing paths that start with a dot.
If you need more help, edit your question and add the relevant parts of your Apache config, your .htaccess, and the handler code showing how you invoke HTML::Mason.
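For reference, when Apache is the culprit, the 404 usually comes from a hardening rule somewhere in the config. A common example worth grepping for (shown here only as something to check, not taken from the asker's setup):

```apache
# A common hardening rule that returns 404 for any dotted path (.git, .htaccess, ...)
RedirectMatch 404 "/\."
```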

Use one instance of YOURLS URL shortener with multiple domains

I've been looking for a way to use YOURLS with multiple domains. The main issue is that when configuring YOURLS you need to supply the domain name in the config.php file (the YOURLS_SITE constant).
Configuring just one domain while multiple domains actually point to YOURLS causes unexpected behavior.
I've looked around and couldn't find a quick hack for this.
I would use this line in config.php:
define('YOURLS_SITE', 'http://' . $_SERVER['HTTP_HOST']);
(add any /subdirectory or whatever if that applies)
Then, as long as your Apache vhosts config is correct, any domain or subdomain pointing at this directory will work. Keep in mind, though, that every short URL will work with every domain, so domain.one/redirect == domain.two/redirect.
I found this quick-and-dirty solution and thought it might be useful for someone.
In the config.php file I changed the constant definition to be based on the value of $_SERVER['HTTP_HOST']. This works for me because I have a proxy in front of the server that sets this header; you can also define virtual hosts on your Apache server and it should work the same (perhaps you will need to use $_SERVER['SERVER_NAME'] instead).
So in config.php I changed:
define( 'YOURLS_SITE', 'http://domain1.com');
to
if (strpos($_SERVER['HTTP_HOST'], 'domain2.com') !== false) {
    define('YOURLS_SITE', 'http://domain2.com/YourlsDir');
    /* domain2 doesn't use HTTPS */
    $_SERVER['HTTPS'] = 'off';
} else {
    define('YOURLS_SITE', 'https://domain1/YourlsDir');
    /* domain1 always uses HTTPS */
    $_SERVER['HTTPS'] = 'on';
}
Note 1: if YOURLS is located in the HTML root you can remove /YourlsDir from the URL.
Note 2: the URL in YOURLS_SITE must not end with /.
Hopefully this helps someone else.
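Since $_SERVER['HTTP_HOST'] ultimately comes from the client, both approaches are safer with an explicit whitelist of allowed hosts. A minimal sketch of that idea (the domain names and /YourlsDir path are placeholders, not values from either answer):

```php
<?php
// Map each allowed Host header to its YOURLS_SITE value; anything else
// falls back to the primary domain instead of trusting a spoofed header.
function yourls_site_for(string $host): string {
    $sites = [
        'domain1.com' => 'https://domain1.com/YourlsDir',
        'domain2.com' => 'http://domain2.com/YourlsDir',
    ];
    return $sites[$host] ?? $sites['domain1.com'];
}

define('YOURLS_SITE', yourls_site_for($_SERVER['HTTP_HOST'] ?? 'domain1.com'));
```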

RewriteLock hangs Apache on restart when added to an otherwise working Rewrite / RewriteMap

I am on a Network Solutions VPS, four domain names share the IP. I have a Rewrite / RewriteMap set up that works. The Rewrite is in the file for the example.com web address at var/www/vhosts/example.com/conf/vhost.conf, the Rewrite being the only thing in the vhost.conf file. It would not work in the main httpd.conf file for the server.
The RewriteMap takes two parts of the URL typed in by the user (http://example.com/bb/cc), looks up a third piece of info (aa) in the matching database record, uses aa as the query string to load a file, and leaves the originally typed URL in the address bar while serving the file selected by aa.
Here is the Rewrite:
Options +FollowSymlinks
RewriteEngine on
RewriteMap newurl "prg://var/www/cgi-bin/examplemap.php"
RewriteRule ^/(Example/.*) ${newurl:$1} [L]
When I add the following either above or below the RewriteMap line:
RewriteLock /var/lock/mapexamplelock
and try to restart Apache, it hangs and Apache will not restart. I have tried different file paths (thinking it might be a permissions issue and just hoping something worked), removing the leading /, quoting the path, different file extensions (e.g. .txt), and different file names; every time it hangs Apache on restart. The Rewrite / RewriteMap works without it, but I have read a lot about the importance of RewriteLock, and warnings ending in DANGEROUS about not using RewriteLock are being written to the log.
Here is the map (located where the Rewrite says):
#!/usr/bin/php
<?php
include '/pathtodatabase';
set_time_limit(0);
$keyboard = fopen("php://stdin", "r");
while (1) {
    $line = fgets($keyboard);
    if (preg_match('/(.*)\/(.*)/', $line, $igot)) {
        $getalias = mysql_query("select aa FROM `table`.`dbase` WHERE bb = '$igot[1]' && cc = '$igot[2]'");
        while ($row = mysql_fetch_array($getalias)) {
            $arid = $row['aa'];
        }
        print "/file-to-take-load.php?aa=$arid\n";
    } else {
        print "$line\n";
    }
}
?>
I looked in the main httpd.conf file and there is nothing I can find about RewriteLock that might be interfering. It's just the standard one that came in the set-up of the VPS.
If anyone has an idea about why this would work only without RewriteLock and the possible fix, it would be greatly appreciated.
Thanks Greg
Apache hangs if you define more than one RewriteLock directive, or if you use it in a vhost config.
RewriteLock should be specified at server config level, and only once. That lock file is used by all prg-type maps, so if you want to use multiple prg maps, I suggest using an internal locking mechanism instead, for example PHP's flock function, and simply ignoring the warning Apache writes in the error log.
See here for more info:
http://books.google.com/books?id=HUpTYMf8-aEC&lpg=PP1&pg=PA298#v=onepage&q&f=false
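The flock-based alternative this answer mentions can be sketched like this (the lock-file path and helper name are assumptions, not part of the original map script):

```php
<?php
// Serialize rewrite-map lookups with flock() instead of RewriteLock.
// The lock file can be any path the Apache user can write to.
function with_map_lock(string $lockfile, callable $lookup)
{
    $fh = fopen($lockfile, 'c'); // create if missing, don't truncate
    flock($fh, LOCK_EX);         // block until this process owns the lock
    $result = $lookup();         // do the database lookup from the map script
    flock($fh, LOCK_UN);         // release so other children can proceed
    fclose($fh);
    return $result;
}
```

Each iteration of the map script's while loop would then wrap its query-and-print step in with_map_lock().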

Can I set up variables for use in an apache configuration file?

I have a PHP webapp convention that I'd like to follow, and a lot of the setup for it is done in the Apache configuration file.
Is there any way I can define some strings containing paths for use in multiple places throughout the configuration file?
A rough example might be:
EnginePath = /opt/engine
AppPath = /opt/anapp
DocumentRoot [AppPath]/Public
CustomLog [AppPath]/Logs/Access.log combined
php_admin_value auto_prepend_file [EnginePath]/EngineBootstrap.php
As you can see, I have lots of opportunities to consolidate several repeated occurrences of the system and app paths. This would help keep the configuration files as generic as possible, requiring a change in app or engine location to be edited once, rather than several times per configuration file.
Thanks for any help!
You can use Define
http://httpd.apache.org/docs/2.4/mod/core.html#define
or mod_macro
https://httpd.apache.org/docs/trunk/en/mod/mod_macro.html
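With Define (Apache 2.4+), the example from the question might look like this sketch:

```apache
Define EnginePath /opt/engine
Define AppPath /opt/anapp

DocumentRoot ${AppPath}/Public
CustomLog ${AppPath}/Logs/Access.log combined
php_admin_value auto_prepend_file ${EnginePath}/EngineBootstrap.php
```

Changing the engine or app location then means editing a single Define line.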
As far as I know, this is not possible in Apache configuration files.
However, what you may be able to do is pre-process your httpd.conf file. I've used this technique with other configuration files, and it should work. For example, using PHP:
Save your config as httpd.conf.php:
<?php
$EnginePath = '/opt/engine';
$AppPath = '/opt/anapp';
?>
DocumentRoot <?php echo $AppPath; ?>/Public
CustomLog <?php echo $AppPath; ?>/Logs/Access.log combined
php_admin_value auto_prepend_file <?php echo $EnginePath; ?>/EngineBootstrap.php
When you want to change your config, call:
php httpd.conf.php > httpd.conf
This means you have to regenerate your conf file every time you want to make a change, but that can also be automated via some quick shell scripting.

Manual alternative to mod_deflate

Say I don't have mod_deflate compiled into apache, and I don't feel like recompiling right now. What are the downsides to a manual approach, e.g. something like:
AddEncoding x-gzip .gz
RewriteCond %{HTTP_ACCEPT_ENCODING} gzip
RewriteRule ^/css/styles.css$ /css/styles.css.gz
(Note: I'm aware that the specifics of that RewriteCond need to be tweaked slightly)
Another alternative is to forward everything to a PHP script which gzips and caches everything on the fly. On every request it would compare timestamps with the cached version and return that if it is newer than the source file. With PHP you can also set the HTTP headers, so the response is treated as if it had been gzipped by Apache itself.
Something like this might do the job for you:
.htaccess
RewriteEngine On
RewriteRule ^(css/styles.css)$ cache.php?file=$1 [L]
cache.php:
<?php
// Resolve the requested path to a local file and reject traversal attempts
$file = realpath($_GET['file']);
if ($file === false || strpos($file, realpath(__DIR__)) !== 0) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
cache($file);

// Return cached or raw file (autodetect)
function cache($file)
{
    // Regenerate the cache if the source file is newer
    if (!is_file($file.'.gz') or filemtime($file.'.gz') < filemtime($file)) {
        write_cache($file);
    }
    // If the client supports GZIP, send compressed data
    if (!empty($_SERVER['HTTP_ACCEPT_ENCODING']) and strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false) {
        header('Content-Encoding: gzip');
        readfile($file.'.gz');
    } else { // Fall back to the uncompressed file
        readfile($file);
    }
    exit;
}

// Save a GZIPed version of the file
function write_cache($file)
{
    copy($file, 'compress.zlib://'.$file.'.gz');
}
Apache will need write permission to generate the cached versions. You can modify the script slightly to store cached files in a different place.
This hasn't been extensively tested and might need slight modification for your needs, but the idea is all there and should be enough to get you started.
There doesn't seem to be a big performance difference between the manual and automatic approaches. I did some apache-bench runs with automatic and manual compression, and both times were within 4% of each other.
The obvious downside is that you'll have to compress the CSS files manually before deploying. The other thing to make very sure of is that you've got the configuration right: I couldn't get wget to auto-decode the CSS when I tried the manual approach, and ab reports also listed the compressed data size rather than the uncompressed one it showed with automatic compression.
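A sketch of the extra directives the manual approach usually needs to get those headers right (untested; file names taken from the question, the FilesMatch block is an assumption):

```apache
AddEncoding gzip .gz
RewriteEngine On
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -f
RewriteRule ^(.*\.css)$ $1.gz [L]
<FilesMatch "\.css\.gz$">
    # keep the original media type and tell caches the response varies
    ForceType text/css
    Header append Vary Accept-Encoding
</FilesMatch>
```

Without the ForceType and Vary fixes, clients may receive the .gz file as application/octet-stream and shared caches may serve compressed bytes to clients that didn't ask for them.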
You could also use mod_ext_filter and pipe things through gzip. In fact, it's one of the examples:
# mod_ext_filter directive to define the external filter
ExtFilterDefine gzip mode=output cmd=/bin/gzip
<Location /gzipped>
# core directive to cause the gzip filter to be
# run on output
SetOutputFilter gzip
# mod_header directive to add
# "Content-Encoding: gzip" header field
Header set Content-Encoding gzip
</Location>
The advantage of this is that it's really, really easy… The disadvantage is that there will be an additional fork() and exec() on each request, which will obviously have a small impact on performance.