I'm not that familiar with Apache.
When using <Location>, I am able to redirect users to a sign-on page, forcing them to authenticate and have proper privileges before accessing the URL.
When using <Directory>, it is supposed to allow me to control access to specified folders and directories, right?
Question:
How does <Directory> behave similarly and differently from <Location>?
With <Location /web>: www.mysite.com/web and www.mysite.com/web/foo will be controlled.
With <Directory /webforms>: what will www.mysite.com/web look like if some of the scripts come from that folder?
With <Directory /pictures>: what will www.mysite.com/web look like if some of the pictures come from that folder?
What about a situation where you have both types of directives active and affecting a single page? What kinds of things should I expect or watch out for?
The Apache HTTP Server documentation has a section called What to use When which, I think, directly answers your question:
Choosing between filesystem containers and webspace containers is actually quite easy. When applying directives to objects that reside in the filesystem always use <Directory> or <Files>. When applying directives to objects that do not reside in the filesystem (such as a webpage generated from a database), use <Location>.
The important part is the following:
It is important to never use <Location> when trying to restrict access to objects in the filesystem. This is because many different webspace locations (URLs) could map to the same filesystem location, allowing your restrictions to be circumvented.
Read on for more information...
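To make the distinction concrete, here is a rough sketch (the paths are made up, and the syntax is the Apache 2.2 style used elsewhere on this page):
# Filesystem container: protects the files themselves, no matter which URL maps to them
<Directory "/var/www/site/private">
Order deny,allow
Deny from all
</Directory>
# Webspace container: protects only the /private URL prefix; if another URL
# (for example one created with Alias) maps to the same files, this block does not apply
<Location /private>
Order deny,allow
Deny from all
</Location>
That second case is exactly the circumvention risk the quoted paragraph warns about.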
Related
I am trying to decide whether to use .htaccess files in each sub-directory to deny all requests for specific files (while also denying directory indexes), or whether it is more security conscious to move all files except for essential files (index.php, .htaccess, robots.txt) outside the root directory and call them from the index file.
Are there any critical differences in security between these two methods for securing files in my web application?
Here is a view of what the .htaccess looks like in the root directory.
# pass the default character set
AddDefaultCharset utf-8
# disable the server signature
ServerSignature Off
<FilesMatch "\.(htaccess|htpasswd|ini|phps|fla|psd|log|sh|lock|DS_Store|json)$">
Order allow,deny
Deny from all
</FilesMatch>
# disable directory browsing
Options All -Indexes
# prevent display of select file types
IndexIgnore *.wmv *.mp4 *.avi *.etc
However, this would not stop someone from accessing a file if they knew the directory structure such as https://www.example.com/security/please_dont_look.cfg
Although that file does not print anything, I don't want anyone to know it exists, and don't want a site-specific solution like using modredirect to redirect calls to specific files.
I could use a .htaccess file in each directory such as this:
order deny,allow
deny from all
From this question and reply (Prevent access to files in a certain folder)
Is one solution more bullet-proof than the other?
As always in such complex systems, security here is about having several lines of defense, keeping things simple and attempting to prevent as many attack vectors as possible.
Theoretically both solutions should provide you with the exact same level of security - the files would not be accessible in either case.
I'd recommend moving files that should not be accessed directly into a directory outside of the web root. It is quite easy to screw up .htaccess files, and that's just not possible when you move the files outside of your webroot. This will also prevent timing attacks against the directory structure of your server: reading .htaccess files comes with a time penalty and that might be measurable, especially if your .htaccess files get big and you have plenty of them in each subdirectory. Actually, I'd recommend skipping .htaccess entirely and just disabling indexes directly in your vhost configuration, so that Apache does not have to look for .htaccess files at all, speeding up your website.
Additionally, if you run PHP via FastCGI, you should disallow file access at the filesystem level for Apache and only allow access from PHP. With this setup it should be outright impossible to access your files by attacking the webserver (excluding PHP) unless you have some privilege escalation vulnerability (in which case you are screwed anyway).
The only way to access your confidential files in this setup would be to convince PHP to read the file or to mess with the filesystem, e.g. by creating a hard link from your web root into your "confidential files outside web root" directory. Protecting against that boils down to ensuring your PHP configuration is as restrictive as possible, that file creation inside the webroot is disallowed and, most importantly, that the PHP application itself is not vulnerable.
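A minimal sketch of that setup (Apache 2.2 syntax; the /var/www/example paths are placeholders, not a recommendation for your exact layout):
# Hypothetical vhost excerpt
DocumentRoot "/var/www/example/public"
<Directory "/var/www/example/public">
# no directory listings, and no .htaccess lookups at all
Options -Indexes
AllowOverride None
Order allow,deny
Allow from all
</Directory>
# Confidential files live in /var/www/example/private, which is never mapped
# to a URL, so Apache will not serve them; only the PHP application reads them.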
My hosting allows the use of .htaccess files, as the server configuration files are not available to me.
I'm aware of the performance hit that override files incur, though, so I was thinking: if Apache provided a mode for having a single .htaccess file, wouldn't that be faster than having to check for multiple .htaccess files, whilst still maintaining the convenience?
If Apache provided a mode for having a single .htaccess file
Well, not a "mode" as such, but you could achieve this by allowing .htaccess for the parent directory (root directory or even one above the document root) and disable the use of .htaccess files in all subdirectories. The .htaccess file in the parent directory will still apply.
Realistically (if indeed this is at all "realistic"), you would probably need to enable .htaccess for the directory above the document root and disable .htaccess in the document root and below, rather than enabling .htaccess for the document root itself. Otherwise, if you enable .htaccess for the document root, you will have to disable .htaccess for each subdirectory individually, and if you add more subdirectories, the server config will need to be updated accordingly. (This is because the AllowOverride directive is only allowed in <Directory> containers without regex, not in <DirectoryMatch> containers.) However, this might not be possible on some shared hosting environments (there might not be an "above the document root") and it could impact the installation of some CMSs.
Note that you obviously need access to the server config (or VirtualHost) in order to implement this, so it is hypothetical in this instance.
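As a hypothetical illustration (assuming a document root of /home/user/public_html and access to the server or vhost config), the arrangement described above might look like:
# Read .htaccess in the directory above the document root...
<Directory "/home/user">
AllowOverride All
</Directory>
# ...but not in the document root or anything below it
<Directory "/home/user/public_html">
AllowOverride None
</Directory>
# A single /home/user/.htaccess then still applies to everything
# served from /home/user/public_html.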
wouldn't that be faster than having to check for multiple .htaccess files
Possibly. But you are only talking about a micro-optimisation at best in real terms. On most sites, even enabling .htaccess files at all will hardly be noticeable - if at all. The "performance hit" you speak of is not as big as you might think. To put it another way, if you find that .htaccess is proving to be a bottleneck then you've either done something wrong, or you have far more serious problems to address.
Note, however, that you are generally only using .htaccess files on smaller sites anyway. On larger / high traffic sites you will have your own VPS / Server and access to the server config, so there wouldn't be any need to use .htaccess (or, importantly, have it enabled).
whilst still maintaining the convenience?
Not exactly. Part of the "convenience" is being able to put the .htaccess file in any directory you like, overriding parent directives and have it apply to just that directory tree. (It is the userland equivalent of the <Directory> container in the server config.)
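For example (a made-up layout, and assuming AllowOverride permits Options), a deeper .htaccess simply overrides its parent for that subtree only:
# /var/www/site/.htaccess
Options -Indexes
# /var/www/site/downloads/.htaccess (applies only to downloads/ and below)
Options +Indexes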
I have a set of Apache configuration settings (in apache.conf) contained within a <Directory /srv/*/public> block which are intended to apply to any public directory.
However, for some of the sites hosted on this server, the public directory is nested one level deeper (so the required path would be /srv/*/app/public), and so the directives in the Directory block are failing to be applied to these, even though I'd like them to be.
I don't want to duplicate the contents of the whole Directory block (it contains quite a lot of directives), but I can't find any way to apply it to two separate paths, and Apache's docs suggest that no use of wildcards can match paths of different depths (since the * doesn't match a /).
What's the best way to do this?
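For context, a stripped-down sketch of the situation described above (the directives inside the block are just placeholders):
# Matches /srv/site1/public, /srv/site2/public, ...
<Directory /srv/*/public>
AllowOverride None
# ...many more directives...
</Directory>
# Not matched: /srv/site3/app/public, because * does not cross a "/"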
I was browsing the web and came across the following:
Source code, including config files, are stored in publicly accessible directories along with files that are meant to be downloaded (such as static assets). [...] You can use .htaccess to limit access. This is not ideal, because it is insecure by default, but there is no other alternative.
Source: owasp.org
Sometimes I use the following code to prevent access from a specific directory:
# contents of .htaccess
Order deny,allow
Deny from all
Allow from none
On servers where there is access outside of the webroot there is obviously less need to prevent access to folders/files with .htaccess.
Can someone explain why they write ".htaccess is insecure by default" and what are alternative ways to prevent access to certain files on a regular LAMP-stack?
.htaccess is not a complete security solution. It doesn't protect you from DDoS, sniffing, or man-in-the-middle attacks (when using auth) without SSL.
As far as denying access to specific files, it's generally fine. The scenarios under which it would fail to do so are scenarios where there has already been a successful exploit somewhere else. Since any files in the directory have to be readable by the process owner, the files are only superficially secured by .htaccess.
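For comparison, when you do have access to the server or vhost config, the same deny can be expressed there instead of in a .htaccess file (a sketch with a hypothetical path), which also removes any dependence on .htaccess overrides being enabled:
# In the vhost / server config rather than .htaccess (Apache 2.2 syntax)
<Directory "/var/www/example/protected">
Order deny,allow
Deny from all
</Directory>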
Here's the problem: we have a family (approx. 8) of websites, each hosted on a different subdomain of a single domain common to every member of the family. E.g.,
ecommerce.my_domain.com
forums.my_domain.com
signup.my_domain.com
For various reasons, each subdomain is administered separately from the others--i.e., different servers, completely autonomous with respect to the others regarding nearly every development decision, including the choice of web framework--for instance, two are Django, one is Zend, and so on (though all run Apache 2.2). We want to fix this, and someday we will, it just won't be anytime soon.
One direct consequence of this structure is that we have multiple Default page names. By 'Default Page' I'm referring to the page the server defaults to when no page on the subdomain is given--sometimes it's 'index.html', sometimes 'index.php', etc. (I know what they are; it's the fact that there are multiple different pages that's the problem.)
(The Default page is the webpage to which your server defaults when no page on the domain is specified. For example, if the "index.html" page is served when you enter "www.my_domain.com", "index.html" is the Default page.)
Here's one problem it causes: our analytics code (JavaScript) will count page views to subdomain1.my_domain.com and subdomain1.my_domain.com/index.html as two separate pages, unless the correct default page is specified. By itself, this can cause a two-fold error in the basic page view measurement. In addition, the analytics system (Google Analytics) only allows a single Default Page to be specified.
After looking into this, it seems one way to do this is at the server level (Apache 2.2): (i) create a CGI directory without using ScriptAlias; (ii) use DirectoryIndex to specify a default document when only the directory is requested.
I suppose this can also be done within the web framework that supports each subdomain, though given we have multiple different frameworks, that option is certainly less appealing.
I would be grateful for the Community's view on the preferred way to do this.
What about using a .htaccess file to handle it?
Just add a line resembling the following, listing the files you want used as the default, in the order Apache should look for them:
DirectoryIndex index.html default.php default.htm foo.html index.cgi
.htaccess reference docs: http://httpd.apache.org/docs/current/howto/htaccess.html
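If any of the subdomains' servers do give you access to the vhost config, the same directive can be set there instead of in a .htaccess file, for example (the DocumentRoot is a placeholder):
# Sketch for one subdomain's vhost
<VirtualHost *:80>
ServerName forums.my_domain.com
DocumentRoot /var/www/forums
DirectoryIndex index.php
</VirtualHost>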