Permission problems on a mounted volume with Apache

So, I have a Mac OS X Snow Leopard server (Server A) that has been running a self-built Apache, but it's been acting up lately and I want to switch to the built-in Apache. Since this is a production server, I want to test the change first, mounting the appropriate directories on my second server (Server B) and testing it there.
So I mount the "/Atlas" directory (my entire CMS) of Server A on Server B with this command:
mount_hfs afp://username:password@server_a/Atlas /Atlas
After having manually created the /Atlas directory.
Now, when I point a virtual host's DocumentRoot at "/Atlas/Sites/sandman/" (which is the correct path for that site on Server A) and browse to the site, Apache reports a 403 (Access forbidden) and says it can't read the file ("You don't have permission to access the requested object. It is either read-protected or not readable by the server.").
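For reference, a minimal Apache 2.2-era virtual host matching that setup might look like the sketch below; only the DocumentRoot path is from my actual config, the ServerName is hypothetical:
# hypothetical server name, for illustration only
<VirtualHost *:80>
    ServerName sandman.example.com
    DocumentRoot "/Atlas/Sites/sandman"
    <Directory "/Atlas/Sites/sandman">
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>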
Now, the files are owned by user "sandman" on both machines. The Apache on Server A runs as user "sandman", but the built-in Apache on Server B runs as user "_www" (UID 70). The files are readable by "world", so user _www SHOULD be able to read them just fine.
Does anyone know what the problem may be? I was hoping I could eventually store the CMS files on Server C (i.e. a third server), mount it on both servers, and load balance between them.
Any ideas? Thanks!

Check that you can really read the files as user _www and that you can list the directories.
You may be missing the directory-listing right for user _www somewhere along the path: on *nix systems, the execute bit on a directory is what allows entering and listing it.
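For example, you can impersonate the Apache user directly (the index.html name below is just an illustration):
# list the directory and read a file as _www; failures point at the missing bit
sudo -u _www ls /Atlas/Sites/sandman/
sudo -u _www cat /Atlas/Sites/sandman/index.html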

What user did you run the mount command as? (note: I assume it's really mount_afp, not mount_hfs.) That user will wind up "owning" the server connection, and will be the only one that gets authenticated access to the server files; other users on the AFP client computer will get the equivalent of guest access to server files. You can view the connection ownership with the mount command:
$ mount
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
afp_0TQ55t0XgDP800dNMO0Pyetl-1.2d00000a on /Volumes/Public (afpfs, nodev, nosuid, mounted by gordon)
From your description, it sounds like it should be working despite this (since the files are world-readable on server B)... but it still might be worth performing the mount under the _www user ID.
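A sketch of what that could look like, assuming _www is allowed to run mount_afp and owns the (empty) mount point:
# hand the mount point to _www, then mount the share as that user
sudo mkdir -p /Atlas
sudo chown _www /Atlas
sudo -u _www mount_afp "afp://username:password@server_a/Atlas" /Atlas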

Related

cPanel - files are not showing in File Manager after copying via SSH / scp

I have just purchased a dedicated server from a UK hosting company that uses cPanel and I have root access
I am using scp to copy a huge (>2 TB) website from another hosting company (1&1 IONOS, using Plesk, not that it should make any difference)
The files copy over fine; using SSH I can run the "ls" command and list all the files that I've copied over
However, when I use the File Manager in the cPanel interface, I can see the first folder name on the left-hand side (i.e. public_html/my-copied-site), but the right-hand pane shows the directory as empty
If I use the "ls" command, I can see the files and folders
If I try to access any of the files directly via a web browser, I get a 403 Forbidden message
What have I done wrong?
The answer to this problem is the ownership of the folders
Using scp over SSH meant that I was logged in as "root", and therefore the owner of the copied folders was also "root"
Changing the owner of the folders (using the "chown" command) to the cPanel account's name resolved the problem
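For example, with a hypothetical cPanel account named "myaccount":
# recursively hand ownership back to the cPanel account user
chown -R myaccount:myaccount /home/myaccount/public_html/my-copied-site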
Hope this helps someone out

Oracle ZFS chown command not permitted

After successfully mounting the directory (ZFS remote storage) from one of the servers, I get an "Operation not permitted" error when I try to change the ownership of the directory. I'm using the following commands:
To mount the remote directory:
mount -t nfs 10.1.32.33:/dir/temp/tools /home/materials
After mounting the directory, the contents belong to nobody:nobody
I want to change ownership so I can run the installer inside the directory.
I'm using the command below to change ownership but it's not working:
chown -R otm:otm materials/
I can always upload the files to the server without using the ZFS storage; however, I want to build a central installer repository so I don't need to re-upload the files/installers for future server installs. I appreciate your help, guys.
NFS servers by default do not allow root access to files - root is normally mapped to "nobody".
See "root squash":
Root squash[2][3] is a reduction of the access rights for the remote
superuser (root) when using identity authentication (local user is the
same as remote user). It is primarily a feature of NFS but may be
available on other systems as well.
This problem arises when a remote file system is shared by multiple
users. These users belong to one or multiple groups. In Unix, every
file and folder normally has separate permissions (read, write,
execute) for the owner (normally the creator of the file), for the
group to which the owner belongs, and for the "world" (all other
users). This allows restriction of read and write access only to the
authorized users while in general the NFS server must also be
protected by firewall.
A superuser has more rights than an ordinary user, being able to
change the file ownership, set arbitrary permissions, and access all
protected content. Even users that do need to have root access to
individual workstations may not be authorized for the similar actions
on a shared file system. Root squash reduces rights of the remote
root, making one no longer superuser. On UNIX like systems, root
squash option can be turned on and off in /etc/exports file on a
server side.
After implementing the root squash, the authorized superuser performs
restricted actions after logging into an NFS server directly and not
just by mounting the exported NFS folder.
In general, you DO NOT want to disable root squash unless you REALLY know what you're doing, as there are serious security issues you can create if you do. And since you didn't even know it exists...
(And that mention of /etc/exports is an extremely limited statement that is wrong on many systems - like Solaris.)
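In practice, the clean workaround is to fix the ownership where you actually are root: on the server itself. A sketch (the export line is Linux-style and shown only for illustration; note that "otm" must map to the same UID on client and server):
# /etc/exports on a Linux NFS server - do NOT do this unless you accept the risks:
#   /dir/temp/tools  10.1.0.0/16(rw,no_root_squash)
# safer: leave root squash alone and change ownership server-side instead
ssh root@10.1.32.33 'chown -R otm:otm /dir/temp/tools'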

Structuring a central Git remote to serve production Apache sites

I've been referring to http://toroid.org/ams/git-website-howto as a starting point for a production web server running on a managed VPS. The VPS runs cPanel and WHM, and it will eventually host multiple client websites, each with its own cPanel account (thus, each with its own Linux user and home directory from which sites are served). Each client's site is a separate Git repository.
Currently, I'm pushing each repository via SSH to a bare repo in the client's home folder, e.g. /home/username/git/repository.git/. As per the tutorial above, each repo has been configured to checkout to another directory via a post-receive hook. In this case, each repo checks out to its own /home/username/public_html (the default DocumentRoot for new cPanel accounts), where the files are then served by Apache. While this works, it requires me to set up (in my local development environment) my remotes like this:
url = ssh://username@example.com/home/username/git/repository.git/
It also requires me to enter the user's password every time I push to it, which is less than ideal.
In an attempt to centralize all of my repositories in one folder, I also tried pushing to /root/git/repository.git as root and then checking out to the appropriate home directory from there. However, this causes all of the checked-out files to be owned by root, which prevents Apache from serving the site, with errors like
[error] [client xx.xx.xx.xx] SoftException in Application.cpp:357: UID of script "/home/username/public_html/index.php" is smaller than min_uid
(which is a file ownership/permissions issue, as far as I can tell)
I can solve that problem with chown and chgrp commands in each repo's post-receive hook; however, that also raises the "not quite right" flag in my head. I've also considered gitosis (to centralize all my repos in /home/git/), but I assume I'd run into the same file-ownership problem, since the checked-out files would then be owned by the git user.
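For reference, the kind of hook being described would look roughly like this (paths follow the tutorial's layout; the chown line is the workaround in question):
#!/bin/sh
# post-receive: check the pushed content out into the account's web root
GIT_WORK_TREE=/home/username/public_html git checkout -f
# the ownership workaround discussed above
chown -R username:username /home/username/public_html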
Am I just approaching this entire thing the wrong way? I feel like I'm completely missing a third, more elegant solution to the overall problem. Or should I just stick to one of the methods I described above?
It also requires me to enter the user's password every time I push to it, which is less than ideal
It shouldn't be necessary if you publish your public SSH key to the destination account's ".ssh/authorized_keys" file.
See also locking down ssh authorized keys for instance.
See also the official reference, the Pro Git book's "Setting Up the Server" chapter.
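For example, from the development machine (assuming a key pair already exists; ssh-keygen creates one):
ssh-copy-id username@example.com
# or, where ssh-copy-id is unavailable:
cat ~/.ssh/id_rsa.pub | ssh username@example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'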

Is it possible to have WAMP run httpd.exe as user [myself] instead of local SYSTEM?

I run a django application over apache with mod_wsgi, using WAMP.
A certain URL allows me to stream the content of image files, the paths of which are stored in database.
The files can be located whether on local machine or under network drive (\\my\network\folder).
With the development server (manage.py runserver), I have no trouble at all reading and streaming the files.
With WAMP, and with network drive files, I get an IOError: obviously because the httpd instance does not have read permission on said drive.
In the Task Manager, I see that httpd.exe is run by SYSTEM. I would like to tell WAMP to run the server as [myself], as I have read and write permissions on the shared folder. (Eventually, the production server should be run by a 'www-admin' user that has the permissions.)
Mapping the network shared folder on a drive letter (Z: for instance) does not solve this at all.
The User/Group directives in httpd.conf do not seem to have any kind of influence on Apache's behaviour.
I've also edited the registry: I tried to duplicate the HKLM\[...]\wampapache registry key under HKEY_CURRENT_USER and rename the original key, but then the new key does not seem to be found when I run this from cmd
> httpd.exe -n wampapache -k start
or when I run WAMP.
I've run out of ideas :)
Has anybody ever had the same issue?
Win+R, services.msc
Edit the wampapache and wampmysqld services to log on as some user.
The tray icon is a convenient front end to "net start wampapache" and "net start wampmysqld".
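The same change can be scripted with the built-in service controller, for example (service names as above; the account and password are placeholders):
sc config wampapache obj= ".\myself" password= "secret"
sc config wampmysqld obj= ".\myself" password= "secret"
net stop wampapache && net start wampapache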
The User/Group directives in httpd.conf do not seem to have any kind of influence on Apache's behaviour.
httpd.exe's parent process is started with system privileges (this is probably why you see it running under SYSTEM). The User and Group lines in httpd.conf determine what user the child processes (that httpd spawns) will run under. These forks are what actually handle page requests, etc., so it is possible that your configuration is already doing what you want; it is just unclear from looking at Task Manager.
You could also try using runas to start WAMP/Apache, though your mileage may vary.
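For example (the WAMP install path below is a guess; adjust it to your layout):
runas /user:%COMPUTERNAME%\myself "C:\wamp\bin\apache\apacheX.Y.Z\bin\httpd.exe"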
I've just found that executing httpd.exe myself works for me... I just lose the funky WAMP tray icon and the "restart apache" menu item, which is really handy whenever I update my application code...
I'll have to make do with this for the moment...

IIS 6 - Create a virtual directory that points to an IIS application on a different server?

Here's the scenario:
Server A is hosting the 'main' application (www.example.com)
Server B is hosting a support application (b.example.com)
They are connected internally to each other through a 192.* address and are both externally available through DNS
Server A has several virtual directories that are mapped through UNC shares:
www.example.com/virtual1 -> \\192.168.1.1\virtual1 (on Server B)
I'd like to be able to run the application that sits on Server B (served through IIS) and make it appear as if it's running on Server A:
www.example.com/application -> b.example.com/app
I'd still want to be able to access server B directly
b.example.com/app
Any ideas?
Edit:
Turns out the application behind the proxy refused to let me dynamically change its form "action" (nor did it let me change anything else). I was able to display the data from the server; I just couldn't post :(
So both answers pointed me in the right direction. I used a proxy:
http://code.google.com/p/iisproxy/
I created a virtual directory on Server A that matched the directories I needed on Server B - and it worked! :-)
This should be possible in IIS. I remember I had to do this once.
Just create a virtual directory using the UNC path pointing to \\ServerB\SharedAppDirOnB and (if necessary) "Connect As..." using the credentials needed for Server B.
If you have problems with "Connect As...", it could be a folder-permissions problem on Server B. Try the following: add a new user account on your main server that has the same name and password as the account on Server B. It sounds stupid, but I remember it solved my issue. You could, for example, add a new "IisCommon" user account on both servers, with the same password on both. Then make sure you give all the necessary file-access permissions to the folder on Server B (and the share permissions!). First try connecting manually via Windows Explorer to verify you can access the share.
Make sure that you mark the new virtual directory as an application and give it the right execution permissions.
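For what it's worth, IIS 6 ships an admin script that can create the virtual directory from the command line; a sketch, assuming the site is named "Default Web Site":
cscript %SystemRoot%\system32\iisvdir.vbs /create "Default Web Site" application \\ServerB\SharedAppDirOnB
REM "Connect As..." credentials still have to be set separately (in the IIS
REM manager UI, or via the UNCUserName/UNCPassword metabase properties)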
Another solution would be some kind of reverse proxy. I used a third-party product for this on IIS 6.0: ISAPIrewrite for IIS. Its "proxy" mode allows you to "forward" requests made to your main server (www.example.com/...) to your other server, while serving the resulting responses as if they were processed by your main "domain" application. The feature is called the "proxy directive", and it accepts regular expressions.
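A sketch of what such a proxy rule could look like in an ISAPI_Rewrite-style httpd.ini (syntax from memory, so check the product's documentation; the internal IP is the 192.* address from the question):
[ISAPI_Rewrite]
# forward /application/* on Server A to /app/* on Server B
RewriteProxy /application/(.*) http\://192.168.1.1/app/$1 [I,U]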
Since serving the virtual directory from server A through a UNC share apparently does not work, you need to serve b.example.com/app from server b.
DNS resolves domain names to IP addresses. You are asking for the same domain name to resolve to two different IP addresses, based on a different URL. This is not something that IIS or Windows can do.
Your options are:
Write a proxy service on server A that passes requests on to server B. If you want it completely transparent (not just a redirect), you'd have to stream back the response as well. This is not trivial, but possible.
Put the server B page into an IFRAME on a new page on server A.
Use a load balancer in front of both servers that can split traffic based on URL