The structure of my project is:
/var/www/mysite
------pages
------scripts
------other
In its corresponding virtual host, the configuration is:
ServerName mysitename.com
DocumentRoot /var/www/mysite/pages/
Because of that, when serving index.php from the pages folder, the resources (scripts, images, etc.) are not found, as expected, because the page tries to load them from URLs like http://mysitename.com/scripts/storage.js.
This of course makes sense.
How would you approach solving this? I am aware that by setting some mod_rewrite rules you can conditionally rewrite URLs; is that a way to go about it? I'm mainly interested in seeing what my options are here, rather than getting one solution like "move your .html file up one layer".
There are two possible solutions:
1. Use an Alias in your Apache server config (of course, you need control over the Apache config). An example:
Alias /scripts /var/www/mysite/scripts
<Directory /var/www/mysite/scripts>
Options Indexes
# Apache 2.2 access syntax; on Apache 2.4 use "Require all granted" instead
Allow from all
</Directory>
2. Create symbolic links inside the pages/ directory. For example, on *nix systems:
cd pages
ln -s ../scripts .
ln -s ../other .
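Note that Apache will only serve files through those symlinks if FollowSymLinks is in effect for the pages/ directory (it usually is by default, unless Options has been restricted). A minimal sketch, assuming your vhost has a Directory block for it:
<Directory /var/www/mysite/pages>
    Options +FollowSymLinks
</Directory>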
Related
I want to proxy a bunch of images on my Apache server so that they are not stored in the webroot.
Specifically, I have all my images in the following folder on my Linux server:
/var/www/img/
However, I want it so that when a user goes to mydomain.com/img/img1.jpg (which would normally map to the server path /var/www/html/img/img1.jpg), it references the following file outside of the webroot:
/var/www/img/img1.jpg
It seems like this is possible using the ProxyPass and ProxyPassReverse rules in an .htaccess file (source: https://httpd.apache.org/docs/2.4/rewrite/avoid.html#proxy), but I'm having trouble understanding their syntax and which path goes where, etc.
Given my above situation, could someone please provide some explicit code that I can write into an .htaccess file to achieve what I want?
Edit: I just solved this problem by adding the following one line to my Apache httpd.conf file, and then restarting the server:
Alias "/img" "/var/www/img"
Here the /img part is the URL path under my domain (it does not need to exist inside the webroot), and the /var/www/img part is the Linux filesystem directory with the actual files that I want it to point to.
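One thing to watch out for: because /var/www/img sits outside the webroot, the Alias alone may not be enough if no existing <Directory> block grants access to it. In that case, something like this (Apache 2.4 syntax) next to the Alias should do it:
Alias "/img" "/var/www/img"
<Directory "/var/www/img">
    Require all granted
</Directory>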
Best way is to add a symbolic link to your other folder:
ln -s /my/target/folder /var/www/html/mynewfolder
If you can edit the Apache conf file for the server, you need to add the FollowSymLinks option to the Directory block:
<Directory "/var/www/html/">
AllowOverride All
Options FollowSymLinks
</Directory>
You might also be able to add that to your .htaccess file as Options +FollowSymLinks if you can't edit the Apache config (this requires AllowOverride Options or All to be in effect for the directory).
You can try doing this with mod_rewrite and the passthrough [PT] flag.
You create an alias to the actual path and then use it in the rule.
Alias "/img" "/var/www/img/"
RewriteRule "img/(.+)\.(jpe?g|gif|png)$" "/img/$1.$2" [PT]
See how that works for you.
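Keep in mind that Alias itself is only valid in the server or virtual host configuration, not in .htaccess. If the rewrite part has to live in an .htaccess file in the webroot, a rough sketch (assuming mod_rewrite is enabled and AllowOverride permits it) would be:
RewriteEngine On
RewriteRule ^img/(.+)\.(jpe?g|gif|png)$ /img/$1.$2 [PT]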
Suppose we have the /home/example.org/public_html/ directory on the filesystem, which serves as the document root of my virtualhost.
The relevant httpd configuration for that vhost would look like this:
<VirtualHost *:80>
ServerName example.org:80
...
DocumentRoot /home/example.org/public_html
<Directory /home/example.org/public_html>
AllowOverride All
...
</Directory>
...
</VirtualHost>
In order to prevent the .htaccess lookups on the filesystem without losing the .htaccess functionality (at least at the DocumentRoot level), I transformed the configuration to the following:
<VirtualHost *:80>
ServerName example.org:80
...
DocumentRoot /home/example.org/public_html
<Directory /home/example.org/public_html>
AllowOverride None
Include /home/example.org/public_html/.htaccess
...
</Directory>
...
</VirtualHost>
The difference is in these two lines:
AllowOverride None
Include /home/example.org/public_html/.htaccess
Let's see what we have accomplished with this: httpd no longer wastes any time looking for and parsing .htaccess files, resulting in faster request processing.
Questions:
1. When using the Include directive, does Apache load the .htaccess file only on service start, or on each request?
2. If it is only loaded on start, how do I refresh the Apache configuration without httpd.exe -k restart?
Firstly, note that checking for .htaccess is commonly not all that big an issue, since the relevant bits of the disk are cached in memory. It becomes an issue where for example you have a very large number of directories under your web root directory or directories, and the hits are scattered amongst them so that the hit rate on cached disk blocks is low. You might be better dealing with that by disabling .htaccess selectively for directory trees where it creates a problem. Parsing the .htaccess directives creates a little CPU load of course, but CPU should generally not be your server's bottleneck.
Answering your question as posed, though: yes, you will need to run a command as root to load the new configuration. Rather than using restart, use reload or (better) graceful.
httpd.exe -k graceful
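On most Linux installs the same graceful reload can also be triggered through apachectl or the service manager; the exact names vary by distribution, for example:
apachectl -k graceful
systemctl reload httpd     # RHEL/CentOS; on Debian/Ubuntu the unit is apache2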
You could (but probably shouldn't) write a cron job to periodically check whether this needs to be run. Without a lot of testing, I think something like this should work, run as a regular root cron job:
#!/bin/bash
# Gracefully reload httpd only if .htaccess has been modified since the last
# (re)start, i.e. if it is newer than the server's pid file.
[ /home/example.org/public_html/.htaccess -nt /var/run/httpd/http.pid ] \
&& httpd.exe -k graceful
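For illustration, a root crontab entry running such a script every five minutes might look like this (the script path here is hypothetical; use wherever you save the check):
*/5 * * * * /usr/local/sbin/check-htaccess-reload.sh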
This creates a bit of disk load itself of course. This load doesn't increase with traffic volume, but might be an issue if you have many such included files.
SECURITY WARNING: It sounds like you are setting up a situation where a non-root user is likely to be able to get Apache to Include directives at will. This is much more powerful than what can be done with a .htaccess file, and amounts to a root exploit. For example, it gives access to things like the User and LoadModule directives, which .htaccess directives can never do.
I recommend that you put Included directives in a file inside your Apache configuration directory, and have it accessible only by root. There are other ways to make sure that only root can edit the .htaccess file, but getting these files out of the user-owned area makes it less likely you'll inadvertently open access again later.
While the .htaccess mechanism does incur extra disk load, it is the mechanism that's designed for use by non-root users. It would be nice to have a mechanism for untrusted users to modify configuration with a limit on how often the .htaccess file would be checked for, but if it exists, I don't know it.
Apache accesses and processes the .htaccess files on each request. This is why one does not need to restart the server every time they are changed.
You do need to restart the server/service for testing any changes made to apache.conf, httpd.conf or the vhost configurations.
Quoting from Apache's tutorial on .htaccess files:
You should avoid using .htaccess files completely if you have access to httpd main server config file. Using .htaccess files slows down your Apache http server. Any directive that you can include in a .htaccess file is better set in a Directory block, as it will have the same effect with better performance.
Since you are already trying to Include the .htaccess from inside a <Directory> block, you would do just as well to put everything from the file directly into that block; there is no functional difference, apart from having to maintain configuration in two places simultaneously.
With the Include approach, the file gets processed just once, at the time of server start (or reload).
I have a question about the Apache's Directory directive, here is what they say in the docs (http://httpd.apache.org/docs/2.4/mod/core.html#directory):
Note that the default access for <Directory "/"> is to permit all access. This means that Apache httpd will serve any file mapped from an URL. It is recommended that you change this with a block such as
<Directory "/">
Require all denied
</Directory>
But how will Apache do what they say (serve any file mapped from an URL) if I have only DocumentRoot set up, e.g.:
DocumentRoot "/usr/local/apache/htdocs"
No Alias "/some/webspace/path" "/", UserDir or other URL mapping rules which map to the root / directory of the system?
In another section of the docs (Security Tips, http://httpd.apache.org/docs/2.4/misc/security_tips.html#protectserverfiles), they give a partial example involving UserDir, as far as I can tell:
One aspect of Apache which is occasionally misunderstood is the feature of default access. That is, unless you take steps to change it, if the server can find its way to a file through normal URL mapping rules, it can serve it to clients.
For instance, consider the following example:
# cd /; ln -s / public_html
Accessing http://localhost/~root/
This would allow clients to walk through the entire filesystem. To work around this, add the following block to your server's configuration:
<Directory "/">
Require all denied
</Directory>
This will forbid default access to filesystem locations.
Is what they say about the Directory directive at http://httpd.apache.org/docs/2.4/mod/core.html#directory just a warning for the case where you use modules like mod_userdir, as they then show at http://httpd.apache.org/docs/2.4/misc/security_tips.html#protectserverfiles? Or is there something else, maybe a little detail about Directory not given in the docs?
Thanks for the attention!
In Apache, there is an unlimited number of hypothetical modules/directives that could change only how a URL is mapped to the filesystem. Obvious/mainstream ones are DocumentRoot, Alias, AliasMatch, RewriteRule, UserDir, etc., but there's no telling what other ones might exist.
Apache simply separates the URI to filesystem mapping completely from whether the core is willing to actually serve from that filesystem location.
There are a few ways you can accidentally expose things, for example with mod_rewrite, and the default configuration file protects you from this with the defaults on <Directory />. The manual is not always good at emphasizing the difference between the compiled-in defaults and the contents of the default conf; the latter can change when the server is repackaged, which is tricky.
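For reference, the pattern the docs are recommending looks roughly like this in a typical default configuration: deny everything at the filesystem root, then explicitly re-open only the locations you actually want served (paths taken from your example):
<Directory "/">
    AllowOverride None
    Require all denied
</Directory>

DocumentRoot "/usr/local/apache/htdocs"
<Directory "/usr/local/apache/htdocs">
    Require all granted
</Directory>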
I have a domain, for example, http://example.com. It is already configured to point to
/var/www/
Basically, I want http://example.com to point to
/var/www/4.0/
and http://example.com/foobar/ to point to
/var/www/moo/
How can I do this with the httpd.conf file for Apache2? Thanks
Assuming you are only serving one domain (example.com), you can change your DocumentRoot to /var/www/4.0/
and set an Alias for /foobar like:
Alias /foobar /var/www/moo
If you are serving more than one domain from the same Apache, then you need to use the DocumentRoot within a VirtualHost tag.
More info is here: http://httpd.apache.org/docs/2.2/vhosts/
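Putting those pieces together, a rough sketch of such a vhost might look like this (2.4-style access directives; on 2.2 use Order allow,deny / Allow from all instead of Require):
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot "/var/www/4.0"
    Alias /foobar "/var/www/moo"

    <Directory "/var/www/4.0">
        Require all granted
    </Directory>
    <Directory "/var/www/moo">
        Require all granted
    </Directory>
</VirtualHost>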
I think you're going about this the wrong way with httpd.conf, but I'll answer your question as you asked it first and then explain about that.
There are two settings in httpd.conf relevant to this.
The DocumentRoot setting is the important one, it configures the base directory from which to serve. Change it as so:
Before:
DocumentRoot "/var/www"
After:
DocumentRoot "/var/www/4.0"
Be sure not to use any / after the 4.0, it's not needed.
A little under 30 lines below this setting there is a matching <Directory "/var/www"> block. As the comment above it says, change it to "/var/www/4.0" too.
This would set www.example.com to the 4.0 directory (first part) and apply the relevant settings to this directory too (second part).
But I don't think you should do that: setting Apache to serve the 4.0/ directory via httpd.conf makes a mess of serving the other directories. I'd suggest you read about redirects and how to implement them with whatever language you're using. Then you can point one URL to another without it ever being noticed in the browser (unless they're really trying to).
So without changing DocumentRoot from "/var/www", you can edit /var/www/index.php (or whatever) and have it redirect to /var/www/4.0/. The same can be done in /var/www/foobar/index.php to display /var/www/moo/ instead, but here I'd really just rename the "foobar" directory on the server to "moo". If you want to get elaborate, look into mod_rewrite, but I'd advise you to try all your alternatives first and only use it if you really need to, it's quite a complex tool.
I'm starting up a new website, and I'm having difficulties enforcing my desired file/folder organization:
For argument's sake, let's say that my website will be hosted at:
http://mywebsite.com/
I'd like (have set up) Apache's Virtual Host to map http://mywebsite.com/ to the /fileserver/mywebsite_com/www folder.
The problem arises now that I've decided I'd like to put a few files (favicon.ico and robots.txt) into a folder that is ABOVE the /www folder that Apache maps http://mywebsite.com/ onto:
robots.txt+favicon.ico go into => /fileserver/files/mywebsite_com/stuff
So, when people go to http://mywebsite.com/robots.txt, Apache should serve them the file from /fileserver/files/mywebsite_com/stuff/robots.txt.
I've tried to set up a redirection via mod_rewrite, but alas:
RewriteRule ^(robots\.txt|favicon\.ico)$ ../stuff/$1 [L]
did me no good, because basically I was telling Apache to serve something that is above its mounted root.
Is it somehow possible to achieve the desired functionality by setting up Apache's (2.2.9) Virtual Hosts differently, or defining a RewriteMap of some kind that would rewrite the URLs in question not into other URLs, but into system file paths instead?
If not, what would be the preferred course of action for the desired organization (if any)?
I know that I can access the aforementioned files via PHP and then stream them, say with readfile(..), but I'd like to have Apache do as much of the work as possible; it's bound to be faster than doing I/O through PHP.
Thanks a lot, this has deprived me of hours of constructive work already. Not to mention poor Apache getting restarted every few minutes. Think of the poor Apache :)
It seems you are set on using a RewriteRule. However, I suggest you use an Alias instead:
Alias /robots.txt /fileserver/files/mywebsite_com/stuff/robots.txt
Additionally, you will have to tell Apache about the restrictions on that file. If you have more than one file treated this way, do it for the complete directory:
<Directory /fileserver/files/mywebsite_com/stuff>
Order allow,deny
Allow from all
</Directory>
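If favicon.ico is handled the same way, a second Alias alongside the first covers it:
Alias /robots.txt /fileserver/files/mywebsite_com/stuff/robots.txt
Alias /favicon.ico /fileserver/files/mywebsite_com/stuff/favicon.ico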
Can you use symlinks?
ln -s /fileserver/files/mywebsite_com/stuff/robots.txt /fileserver/files/mywebsite_com/stuff/favicon.ico /fileserver/mywebsite_com/www/
(ln is like cp, but with -s it creates symlinks instead of copies.)