Configure Apache to use different Unix User Accounts (www-data) per Site

An Apache 2.x web server with the default configuration from the Ubuntu/Debian repositories will use the www-data Unix account for the apache2 processes handling web requests. Assuming that Apache is serving two different sites (domain1.com and domain2.com), is it possible for Apache to use Unix user www-data1 when handling requests to domain1.com, and Unix user www-data2 when handling requests to domain2.com? The motivation is to isolate the code for each domain from the other.

Take a look at suEXEC.
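If you go the suEXEC route, mod_suexec's SuexecUserGroup directive is set per virtual host. A minimal sketch (the user/group names are examples; note suEXEC only applies to CGI-style execution, so PHP would have to run via CGI/FastCGI):

    <VirtualHost *:80>
        ServerName domain1.com
        DocumentRoot /var/www/domain1
        # run this vhost's CGI/FastCGI scripts as www-data1 instead of www-data
        SuexecUserGroup www-data1 www-data1
    </VirtualHost>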

suPHP is also a nice thing to look into:
"suPHP is a tool for executing PHP scripts with the permissions of their owners. It consists of an Apache module (mod_suphp) and a setuid root binary (suphp) that is called by the Apache module to change the uid of the process executing the PHP interpreter."
-http://www.suphp.org/
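A minimal sketch of per-vhost suPHP configuration (directive names as documented by the suPHP project; the user/group names are examples):

    <VirtualHost *:80>
        ServerName domain2.com
        DocumentRoot /var/www/domain2
        suPHP_Engine on
        # execute this vhost's PHP scripts as www-data2
        suPHP_UserGroup www-data2 www-data2
    </VirtualHost>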

You can use apache2-mpm-itk to achieve this.
You will be able to run each vhost under a user and group of your choice.
Check this article for details:
http://www.howtoforge.com/running-vhosts-under-separate-uids-gids-with-apache2-mpm-itk-on-ubuntu-9.04
I used this on my development machine (Ubuntu). If you're using it in production, please read this page carefully:
http://mpm-itk.sesse.net/
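With apache2-mpm-itk installed, assigning a user/group per vhost is a single directive. A sketch, reusing the users from the question:

    <VirtualHost *:80>
        ServerName domain1.com
        DocumentRoot /var/www/domain1
        # all requests to this vhost are handled under this uid/gid
        AssignUserID www-data1 www-data1
    </VirtualHost>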

List of served files in Apache

I am doing some reverse engineering on a website.
We are using a LAMP stack on CentOS 5, without any commercial/open-source framework (Symfony, Laravel, etc.), just plain PHP with an in-house framework.
I wonder if there is any way to know which files in the server have been used to produce a request.
For example, let's say I am requesting http://myserver.com/index.php.
Let's assume that 'index.php' calls other PHP scripts (e.g. to connect to the database and retrieve some info) and that it also includes a couple of other HTML files, etc.
How can I get the list of those accessed files?
I already tried enabling server-status in Apache, and although it is working I can't get what I want (I also passed the 'refresh' parameter).
I also used lsof -c httpd, as suggested in other forums, but it is producing a very big output and I can't find what I'm looking for.
I also read the apache logs, but I am only getting the requests that the server handled.
Some other users suggested adding PHP directives like 'self', but that means I would need to know beforehand which files to modify to include that directive (which I don't), and that is precisely what I am trying to find out.
Is it actually possible to trace the internal activity of the server and get those file names and locations?
Regards.
I haven't tried this yet, but it looks like mod_log_config is the answer to my own question.
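For what it's worth, a sketch of what that could look like: mod_log_config's %f format code logs the filename Apache maps each request to. Note this only captures the entry script, not files that PHP itself includes:

    # in httpd.conf: log the resolved filename (%f) along with each request
    LogFormat "%h %t \"%r\" %>s %f" withfiles
    CustomLog logs/files_log withfiles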

How to run Tomcat in the most secure way?

We are using Apache Tomcat 7 for our web applications and we have decided to move to production.
So now is the time to think about how to secure Tomcat and the machine. After reading "Apache Tomcat security considerations" we decided to run the Tomcat process under a dedicated user with minimal privileges.
From what I understand, the best option is to configure it so that the running Tomcat process has only read access to all the Tomcat files.
I figured I would do it in this way:
I would create 2 users:
-tomcat_process - only for running tomcat
-admin - this is the one all the files belong to
tomcat_process will have access to the conf directory, and will also be able to run scripts from tomcat/bin/.
My main problem is that Tomcat needs to write to some files in $CATALINA_HOME/$CATALINA_BASE. I know I can change the location of the logs and work directories, and I thought I would point them to the tomcat_process home dir (is this even a good idea?).
But I can't find any information on whether I can change the path to the /conf/Catalina dir. Is it possible?
I would like to avoid adding write access to the conf directory, as the whole configuration sits in there.
Or do you think that I should leave those directories where they are and just add write privileges to them for tomcat_process?
I was wondering if you could please tell me whether this is a correct approach, or whether I can do it better?
I'm so confused by all those security guides which tell me to restrict privileges but don't tell me how to do it :(
Keeping it simple is, I think, the key:
- Create a new Tomcat instance for each (set of) web application(s), with its own user.
- Limit the Tomcat resources to only the Tomcat user. On Linux you can use the chmod/chown commands for this (see the sketch after this list).
- Place Tomcat behind a reverse proxy: Internet (https) <- external firewall -> Apache reverse proxy <- internal firewall (block all unless whitelisted) -> Tomcat
- Delete all standard webapps: 'manager', 'root', 'docs'.
- Disable the shutdown command in server.xml.
- As for Java web applications, try to contain each in its own sandbox, meaning its own database and its own users.
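A sketch of the permission layout with chown/chmod, assuming $CATALINA_HOME is /opt/tomcat, a group named tomcat_process exists, and logs/work/temp are the only directories Tomcat must write to:

    # everything owned by admin; tomcat_process gets read-only access via group
    chown -R admin:tomcat_process /opt/tomcat
    chmod -R g+r /opt/tomcat
    find /opt/tomcat -type d -exec chmod g+x {} \;
    # only the directories Tomcat actually writes to are writable by its user
    chown -R tomcat_process /opt/tomcat/logs /opt/tomcat/work /opt/tomcat/temp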
To save maintenance effort, you could run multiple instances using one Tomcat binary and a single Tomcat user.
http://www.openlogic.com/wazi/bid/188102/How-to-Run-Multiple-Instances-of-Tomcat-on-a-Single-Server

Trying to properly configure mod_alias in Apache

I'm running Apache 2.2.24 on Mac OS X 10.9.1. Currently, we have a network drive at /Volumes/GitWebsites where we access all of our Git repos. I would like to configure Apache to serve our PHP-based repos from that directory, so that http://localhost/phpsite1/ (or /phpsite2/, etc.) serves sites from /Volumes/GitWebsites/phpsite1/ (or /phpsite2/) in the browser. My two questions are:
Do I simply modify the server root, or do I need to use mod_alias in the httpd.conf file?
What are the permission settings I need in order for Apache to access /Volumes/GitWebsites?
I've done configuration changes like this in IIS 7.5 and set up a NodeJS dev environment, but I'm still new to making large-scale changes to Apache. Thanks for any help given.
If you are happy with serving the contents of /Volumes/GitWebsites as-is, then it should be fine to point the document root at it. It also makes it easy to add sites later.
However, this could become troublesome if you later want to manage PHP configuration for the sites separately.
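A sketch of the document-root approach in httpd.conf (Apache 2.2 syntax, since you're on 2.2.24). For permissions, the user Apache runs as on OS X (_www by default) needs read and execute access to /Volumes/GitWebsites and to every directory above it:

    DocumentRoot "/Volumes/GitWebsites"
    <Directory "/Volumes/GitWebsites">
        Options FollowSymLinks
        AllowOverride All
        # Apache 2.2 access control
        Order allow,deny
        Allow from all
    </Directory>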

Executing scripts as the Apache user (www-data) insecure? What does a contemporary setup look like?

My scripts (PHP, Python, etc.) and the scripts of other users on my Linux system are executed by the Apache user, aka "www-data". Please correct me if I'm wrong, but this might lead to several awkward situations:
I'm able to read the source code of other users' scripts by using a script. I might find hardcoded database passwords.
Files written by scripts and uploads are owned by www-data and might be unreadable or undeletable by the script owner.
Users will want their upload folders to be writable by www-data. Using a script, I can now write into other users' upload directories.
Users frustrated with these permission problems will start to set file and directory permissions to 777 (just take a look at the Wordpress Support Forum…).
One single exploitable script is enough to endanger all the other users. OS file permission security won't help much to contain the damage.
So how do people nowadays deal with this? What's a reasonable (architecturally correct?) approach to supporting several web frameworks on a shared system without weakening traditional file-permission-based security? Is using FastCGI still the way to go? How do contemporary interfaces (WSGI) and performance strategies fit in?
Thanks for any hints!
As far as I understand this (please correct me if I am wrong!):
Ad 1.-4.: with WSGI you have the possibility to change, and therefore restrict, the user/group on a per-process basis.
http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess
Ad 5.: with WSGI you can isolate processes.
http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIProcessGroup
quote from mod_wsgi-page:
"An alternate mode of operation available with Apache 2.X on UNIX is 'daemon' mode. This mode operates in similar ways to FASTCGI/SCGI solutions, whereby distinct processes can be dedicated to run a WSGI application. Unlike FASTCGI/SCGI solutions however, neither a separate process supervisor or WSGI adapter is needed when implementing the WSGI application and everything is handled automatically by mod_wsgi.
Because the WSGI applications in daemon mode are being run in their own processes, the impact on the normal Apache child processes used to serve up static files and host applications using Apache modules for PHP, Perl or some other language is much reduced. Daemon processes may if required also be run as a distinct user ensuring that WSGI applications cannot interfere with each other or access information they shouldn't be able to."
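A minimal sketch of daemon mode (the application name, user, and paths are made up for illustration):

    # each application runs in its own processes under its own user
    WSGIDaemonProcess site1 user=user1 group=user1 processes=2 threads=15
    WSGIProcessGroup site1
    WSGIScriptAlias / /home/user1/site1/app.wsgi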
All your points are valid.
Points 1, 3 and 5 are solved by setting the open_basedir directive in your Apache config.
2 and 4 are truly annoying, but files uploaded by a web application are also (hopefully) removable with the same application.
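A sketch of per-vhost open_basedir, assuming PHP runs as mod_php (hostname and paths are examples):

    <VirtualHost *:80>
        ServerName user1.example.com
        DocumentRoot /home/user1/public_html
        # scripts in this vhost may only open files under these paths
        php_admin_value open_basedir "/home/user1/public_html:/tmp"
    </VirtualHost>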

Configuring Drupal to work with an existing webapp

I have an existing web application which I have been building with an ant script and deploying as a .war file to Tomcat.
I am trying to add Drupal to my current technology stack to provide CMS and general UI-related functionality, so that I don't have to write my HTML pages by hand and can use templates instead.
During the installation of Drupal7, some of the instructions suggest that I go to this directory:
/etc/apache2/sites-available
and change the DocumentRoot to
/home/myuser/drupal/drupal7
If I make the docroot a basic directory on the file system, how will this impact how the application will work? In addition to Apache, I also have Tomcat server. My goal is to get them to all play nice together. How is this best accomplished?
If I make the docroot a basic directory on the filesystem
I'm not sure what you mean by this. There's no qualitative difference between /var/www and /home/myuser/drupal/drupal7. The latter is longer and in the user's home directory, but assuming this user would be administering the service anyway, that doesn't matter.
Next, the best way to make Tomcat and Apache get along is probably to run them on different subdomains. You could use the same domain, but that would mean running one of the daemons on a nonstandard port, which looks strange and might run into firewall trouble for some users.
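A sketch of the subdomain setup, with Apache serving Drupal directly and proxying a second subdomain through to Tomcat (the hostnames are made up; this assumes mod_proxy and mod_proxy_http are enabled):

    <VirtualHost *:80>
        ServerName drupal.example.com
        DocumentRoot /home/myuser/drupal/drupal7
    </VirtualHost>

    <VirtualHost *:80>
        ServerName app.example.com
        # hand everything on this subdomain to Tomcat's default HTTP port
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>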