Recovering Apache from a mounted, unavailable NFS Mount

I have several web applications in production that utilize NFS mounts to share resources (usually static asset files) among web heads. In the event that an NFS mount becomes unavailable, Apache will hang requesting files that cannot be accessed, and the kernel will log:
Nov 2 14:21:20 server2 kernel: nfs: server server1 not responding, still trying
I reproduced the behavior in RHEL5 running NFS v3 and Apache 2.2.3:
Create an NFS export on Server1 (contents of my /etc/exports)
/srv/test_share server2(rw)
Mount the NFS share on Server2 (contents of my /etc/fstab)
server1:/srv/test_share /mnt/test_share nfs defaults 0 0
Set up a virtual host in Apache with a simple HTML file referencing image files stored on the NFS share
Load the site; the HTML and image files all return 200
Unmount the NFS share; loading the page returns 404s for the referenced images
Remount the NFS Share
Simulate an NFS crash by turning NFS off on Server1 - reloading the site hangs retrieving the referenced files.
Internet searches so far have not turned up a good solution. Basically, the desired behavior would be for the web server to return 404s rather than hanging until the NFS mount recovers.
Cheers,
Ben

A couple of options:
Get your NFS mount options right: you need a soft mount so NFS access can be interrupted. Try soft,intr,timeo=10 instead of the defaults (see the example fstab line after this list).
Sync your document roots with something else like rsync, or script yourself a semi-automatic checkout/export from your SCM, if you use one. Using an SCM is recommended anyway; it gives you the possibility to revert to the last working version, for instance.
Use a real distributed filesystem (preferably a fault-tolerant one like Coda) or even a distributed block device system like DRBD.
Options 2 and 3 give you disconnected operation and are therefore much more robust than NFS. DRBD is sexy, but my advice would be option 2 with something like Git or SVN: simple and robust.
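For option 1, the fstab entry from the question would become something like this (just a sketch; tune the timeout to taste):
server1:/srv/test_share /mnt/test_share nfs soft,intr,timeo=10 0 0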

I would not directly serve from the NFS mount, but instead from your local filesystem.
It wouldn't be too hard to set up a cron job that syncs the NFS mount to the local filesystem every few minutes. Apache would serve its content from there, not depending on the NFS mount. If the mount goes down, Apache would still be able to serve the assets, although they might be out of date until the NFS mount comes back up.
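A rough sketch of such a cron job, reusing the mount point from the question (the local target directory is a placeholder):
# /etc/cron.d/sync-assets -- copy static assets from the NFS mount to local disk every 5 minutes
*/5 * * * * root rsync -a /mnt/test_share/ /var/www/static_assets/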

Related

How to get list of mounted filesystems on NFS server

For auditing purposes I need to track all remote NFSv4 mounts on an NFS server (CentOS 7) to get both the identity of the mounting system AND the filesystem that it mounted. Using the 'netstat -an' command gets me the identity of the remote system, but now I need to know what it mounted. It also gives no clue as to whether that system unmounted one filesystem and then mounted a different one.
I have seen various references to both 'rmtab' and 'showmount', but they do not show me the currently mounted filesystems and, from what I can see, they are only good for NFSv3 and older mounts. I have also seen reference to the file /proc/fs/nfsd/clients but cannot see such a file on any of my servers. Surely the information as to who has what mounted has to be available somewhere on the server, even if it is a convoluted path to get there (auditing nfsservctl syscalls worked in the olden days).
Related to that, 'ps' shows me the '[nfsv4.1-svc]' process, but I haven't been able to track down who/what/why that is and whether it is useful.

Proposal to Migrate OpenNebula Datastore from Local FS to NFS

I have an instance of OpenNebula with 2 nodes running KVM and a local file store. VM images are scp'd to each node, so there is no option of failover or live migration.
I would like to implement NFS shared storage and move the VMs from the local FS datastore to the NFS shared storage datastore. OpenNebula supports migrating VMs between datastores, but only datastores of the same type, i.e. 'ssh' to 'ssh' but not 'ssh' to 'shared'.
I am working on a method of achieving this, and would love some feedback as to why this is a good or a bad idea.
Thanks
OpenNebula doesn't currently support migrating VMs from one type of datastore to a different type of datastore. I have been working on a method that works, and I want to document it here to get some feedback and opinions on it.
A datastore type is identified primarily by the Transfer Manager driver ('TM_MAD') setting. This setting cannot be changed, either through Sunstone or through the CLI, so we need a method to do just that. This is what I did. I started with a fresh install of OpenNebula 5.4.13 in one VM, and 2 node VMs, all running Debian 9 within VMware virtual machines (don't forget to enable virtualisation in the VM CPU options).
NOTE: This is an experimental process, so make sure you back up everything first!
Steps
To migrate to a different store, there are a few steps we need to do. They are as follows:
Setup the NFS share exports,
Move the VM images to the NFS share and mount the datastore,
Change the datastore types,
Configure the nodes for NFS share.
Setup NFS Server
The first thing we want to do is set up the NFS shares that we want to use. I'm using a single share for the base datastore folder, but you could use separate shares for each datastore ID from different NFS servers.
On the NFS Server create the datastore folder i.e. mkdir /share/one_datastore,
Add the datastore path to /etc/exports and export the new share with exportfs -rav (an example export entry is shown after this list),
Confirm the share is available showmount -e localhost
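For reference, an export entry along these lines should do the job (the client subnet and options here are placeholders; adjust them for your network):
# /etc/exports on the NFS server
/share/one_datastore 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)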
Prepare to Migrate
Before we modify the datastores there are a few things to do first:
Shut down any running VMs and undeploy them. This saves the machine state and copies the images back to the image store (the CLI commands are sketched after this list),
Stop Sunstone and OpenNebula services systemctl stop opennebula && systemctl stop opennebula-sunstone.
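In case the CLI is handier than Sunstone for this, the clean-up looks roughly like this (the VM ID is an example):
onevm list          # note the IDs of any running VMs
onevm undeploy 3    # repeat for each running VM (3 is an example ID)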
Migrate Data
With shared storage all the nodes access the same VM disk images, so copy the VM data to the NFS share ready for mounting.
From the Sunstone frontend server confirm the NFS shares showmount -e [nfs-server],
Create a temp folder to mount the share in mkdir /mnt/datastore,
Temporarily mount the NFS folder mount [nfs-server]:/share/one_datastore /mnt/datastore,
Move the datastore folders to the share mv /var/lib/one/datastores/* /mnt/datastore/
OpenNebula datastore folders now live on the NFS server: ls /mnt/datastore should list folders 0, 1 and 2,
Mount the NFS share to replace the OpenNebula datastore folder mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the folders are available ls /var/lib/one/datastores should list our 3 folders 0, 1 and 2,
Add the mount into /etc/fstab to persist the mount on boot (an example line is shown below).
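The fstab line for that mount would look something like this (the _netdev option is my addition, to make sure the network is up before the mount is attempted):
[nfs-server]:/share/one_datastore /var/lib/one/datastores nfs defaults,_netdev 0 0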
The OpenNebula frontend is now configured to access the datastore folders from the NFS share. Next we want to change the datastore type from ssh to shared.
Change Datastore Types
The datastore configuration is stored in the OpenNebula database /var/lib/one/one.db. We can change the driver type by editing the datastore configuration data, which then tells OpenNebula which drivers to use and how to handle the datastore data. By default OpenNebula uses an SQLite database, with MySQL as an option. I'm using SQLite, but the same works for MySQL.
Open the OpenNebula database sqlite3 /var/lib/one/one.db,
View all tables with .tables. datastore_pool is the table we want to modify,
List all the records in the table: select * from datastore_pool; will result in a screenful of configuration data. Each record has an identifier, oid, which matches the datastore ID, like this (the leading 0 is the datastore ID of the default SYSTEM datastore):
0|system|<DATASTORE><ID>0</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>system</NAME><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>0</OWNER_A><GROUP_U>1</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>0</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><DS_MAD><![CDATA[-]]></DS_MAD><TM_MAD><![CDATA[ssh]]></TM_MAD><BASE_PATH><![CDATA[/var/lib/one//datastores/0]]></BASE_PATH><TYPE>1</TYPE><DISK_TYPE>0</DISK_TYPE><STATE>0</STATE><CLUSTERS><ID>0</ID></CLUSTERS><TOTAL_MB>0</TOTAL_MB><FREE_MB>0</FREE_MB><USED_MB>0</USED_MB><IMAGES></IMAGES><TEMPLATE><ALLOW_ORPHANS><![CDATA[NO]]></ALLOW_ORPHANS><DISK_TYPE><![CDATA[FILE]]></DISK_TYPE><DS_MIGRATE><![CDATA[YES]]></DS_MIGRATE><RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS><SAFE_DIRS><![CDATA[/var/tmp]]></SAFE_DIRS><SHARED><![CDATA[NO]]></SHARED><TM_MAD><![CDATA[ssh]]></TM_MAD><TYPE><![CDATA[SYSTEM_DS]]></TYPE></TEMPLATE></DATASTORE>|0|0|1|1|0
Now to change the datastore type. Grab the data from the third column, body (you can run select body from datastore_pool where oid=0;), and copy it to your favourite text editor (that's the chunk starting with <DATASTORE> and ending with </DATASTORE>). Find and replace:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace with: <TM_MAD><![CDATA[shared]]></TM_MAD>
Find: <SHARED><![CDATA[NO]]></SHARED>
Replace with: <SHARED><![CDATA[YES]]></SHARED>
Now to update the SYSTEM datastore record. Run the following command on the database, replacing [datastore-config] with the text block you just modified: update datastore_pool set body='[datastore-config]' where oid=0;
Updating the IMAGE datastore is a little different. There is no SHARED option, but we want to use either the shared or qcow2 driver. I used qcow2. So, run select body from datastore_pool where oid=1; and then:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace: <TM_MAD><![CDATA[qcow2]]></TM_MAD>
Update the record: update datastore_pool set body='[datastore-config]' where oid=1;,
Update the FILES datastore (oid=2) by replacing <TM_MAD><![CDATA[ssh]]></TM_MAD> with <TM_MAD><![CDATA[shared]]></TM_MAD> and update it using the method above.
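If you prefer to script the edits instead of pasting into a text editor, a sketch along these lines should work for the SYSTEM datastore (it assumes the default SQLite database location and that the body contains no single quotes, which holds for a stock install):
# dump, edit and write back the SYSTEM datastore body (oid=0)
sqlite3 /var/lib/one/one.db "select body from datastore_pool where oid=0;" > /tmp/ds0.xml
sed -i -e 's|<TM_MAD><!\[CDATA\[ssh\]\]></TM_MAD>|<TM_MAD><![CDATA[shared]]></TM_MAD>|' \
       -e 's|<SHARED><!\[CDATA\[NO\]\]></SHARED>|<SHARED><![CDATA[YES]]></SHARED>|' /tmp/ds0.xml
sqlite3 /var/lib/one/one.db "update datastore_pool set body='$(cat /tmp/ds0.xml)' where oid=0;"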
Now that the datastores have been updated to use the shared driver, let's start Sunstone and check that the datastores show up.
systemctl start opennebula && systemctl start opennebula-sunstone
Jump into the Sunstone web UI and go to Datastores. Open each datastore to check whether SHARED is enabled and that the correct drivers show, i.e. shared or qcow2.
~DON'T DO ANYTHING YET~ We still need to configure the nodes!
Configure the Nodes
Because we stopped and undeployed the VMs, there shouldn't be any data in the node datastores, so we can simply mount the NFS share over the datastores folder. Confirm the folders are empty first and make sure to take backups! This is an experimental process, so be warned! Right, let's get on with it:
Check the contents of /var/lib/one/datastores. If you are mounting each datastore-ID folder on its own NFS share, you can do that instead of mounting the entire datastores folder; just empty the 0, 1 and 2 folders. Otherwise remove all folders from the datastores folder,
If not already installed: apt-get install nfs-common,
Check for NFS shares: showmount -e [nfs-server],
Mount the nfs share to the datastore folder: mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the mount i.e. df,
Edit /etc/fstab, adding the mount so it's mounted on the next boot.
Restart your node to confirm the datastore NFS mount persists, and to give it a fresh start!
Repeat with all host nodes.
Test it Out
In Sunstone go to the Hosts tab and check they are up and running. Next grab a VM and deploy it. It should deploy without any issues and start booting.
Once it is up and running I like to constantly ping the VM while testing live migration. So start a ping (ping [vm-ip] -t on Windows) and then in Sunstone open the VM and do a 'Live Migrate' to another node. Watch the ping and check the logs to make sure it succeeded. I found I had to refresh the display and go to the Hosts tab to check the VM had migrated. After that it showed correctly, but I think it's a caching issue in my browser. After the live migration you should still see the ping rolling along, with maybe one failed ping in the results.
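The same migration can also be kicked off from the CLI if you prefer (the IDs below are examples; check onehost list for the destination host ID):
onehost list              # find the ID of the destination host
onevm migrate --live 3 1  # live-migrate VM 3 to host 1 (example IDs)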
Conclusion
So that's the process I used to migrate from ssh local storage to shared storage. I've tested it and it is working without any issues. However, if you do have any issues or have an opinion on this process, please let me know. If there are any pitfalls I have overlooked, please also let me know.
Ok, have fun with it. I'm off to try moving the shared storage over to some kind of shared cluster like Ceph or GlusterFS!

Unable to mount nfs share using autofs on solaris10

I am trying to mount an nfs share to a solaris 10 machine at bootup without any luck so far.
The nfs share is accessible and mounts without any problems if I do so manually from the command line (mount -F nfs server_hostname:/exported_dir_path/ /mnt/tmpdir). But I don't know how to tell autofs to mount it at bootup.
We have another machine on the same network (also Solaris) that has it working, but I can't figure out how it is configured differently from the non-working one.
I googled the problem and found that the /etc/vfstab, /etc/auto_master, and /etc/hosts files need to have proper entries to make this work. I compared these files from the non-working machine with the ones on the working machine but did not notice any differences.
Could someone please guide me to properly configure autofs to mount nfs shares on a solaris10 machine?
Thanks,
Aashish.

Permission problems on a mounted volume with Apache

So, I have a Mac Snow Leopard server (Server A) and I've been using a self-built Apache for it, but it's been acting up lately and I want to use the built-in one. But since this is a production server, I want to test it out first, mounting the appropriate directories on my second server (Server B) and testing it there.
So I mount the "/Atlas" directory (my entire CMS) of Server A on Server B with this command:
mount_hfs afp://username:password#server_a/Atlas /Atlas
After having manually created the /Atlas directory.
Now, when pointing a virtual host to have DOCUMENT_ROOT at "/Atlas/Sites/sandman/" (which is the correct path for that site on Server A) and surfing to the site, Apache reports a 403 (Access forbidden) and says it can't read the file ("You don't have permission to access the requested object. It is either read-protected or not readable by the server.")
Now, the files are owned by user "sandman" on both machines, and Apache on Server A runs as user "sandman", but the built-in Apache on Server B runs as user "_www" with UID 70. The files are readable by "world", so user _www SHOULD be able to read them just fine.
Anyone know what the problem may be? I was hoping that I could perhaps store the CMS files on Server C (i.e. a third server), mount it on both servers and then load balance between them.
Any ideas? Thanks!
Check that you can really read the files as user _www and that you can list them.
Maybe you're missing a directory listing right for user _www. It's the execute right on directories on *nix systems.
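A quick way to test both, assuming the paths from the question (index.html is just an example file):
sudo -u _www ls /Atlas/Sites/sandman/            # tests the execute (search) bit on each directory in the path
sudo -u _www cat /Atlas/Sites/sandman/index.html # tests read access to an actual file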
What user did you run the mount command as? (note: I assume it's really mount_afp, not mount_hfs.) That user will wind up "owning" the server connection, and will be the only one that gets authenticated access to the server files; other users on the AFP client computer will get the equivalent of guest access to server files. You can view the connection ownership with the mount command:
$ mount
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
afp_0TQ55t0XgDP800dNMO0Pyetl-1.2d00000a on /Volumes/Public (afpfs, nodev, nosuid, mounted by gordon)
From your description, it sounds like it should be working despite this (since the files are world-readable on server B)... but it still might be worth performing the mount under the _www user ID.

Is there a way to check if a directory exists in Apache configuration files?

Is there a way to include configuration settings in Apache based on if a directory exists? Basically I have a portable hard drive that I transport between work and home that has some stuff I'm developing on it. I only want the Apache config to load a particular virtual host if the folder exists.
Since Apache 2.4.34 you can use <IfFile>...</IfFile>, which will check whether a file exists. There are more details on the <IfFile> page.
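A minimal sketch of that, keying off the vhost config file on the portable drive itself (the paths are placeholders):
# Only load the vhost if the drive, and its config file, are present
<IfFile /Volumes/PortableDrive/projects/vhost.conf>
    Include /Volumes/PortableDrive/projects/vhost.conf
</IfFile>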
No, there seems to be no direct way to do this.
The only thing that might be a solution is the IfDefine directive. You can set defines using the -D parameter when the server is started.
The parameter-name argument is a define as given on the httpd command line via -Dparameter, at the time the server was started.
You might be able to check for the existence of a directory in a batch or bash file, and set the -D parameter accordingly.
Whether that is an option, will depend on how your server is started from the portable hard drive.
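A rough sketch of that approach (the paths, the define name and the wrapper script are placeholders):
#!/bin/sh
# start-apache.sh -- define PORTABLE only when the drive's folder is present
if [ -d /Volumes/PortableDrive/projects ]; then
    apachectl -k start -DPORTABLE
else
    apachectl -k start
fi
And in httpd.conf:
<IfDefine PORTABLE>
    Include /Volumes/PortableDrive/projects/vhost.conf
</IfDefine>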
I've come up with a solution that seems to work for Linux and OS X, and it hinges on "mountpoints". It might be possible to emulate it within Windows, as well, but you would probably have to get creative with FUSE and/or Cygwin.
If you create an empty folder in your home directory, such as "/Users/username/ExtraVhosts", you can add an apache directive to "Include /Users/username/ExtraVhosts/*".
Then, when you insert your thumb drive, you can mount it somewhere and then use mountpoint "binding" to cross-link the ExtraVhosts folder to a folder on the mobile device.
An OS X example:
I have a thumb drive called 'Cherrybomb'
When I insert it, it always gets mounted to /Volumes/Cherrybomb
I can then use bindfs (sudo port install bindfs) to mount a subfolder of it, like so:
sudo bindfs /Volumes/Cherrybomb/Projects/vhosts /Users/username/ExtraVhosts
Then I can restart apache to read in the updated configuration:
sudo /opt/local/apache2/bin/apachectl restart
At that point, it's just a matter of adding entries in /etc/hosts for server aliases to get picked up.
The linux equivalent would be using the "--bind" parameter of the mount command.
One caveat: This makes it difficult to quickly unmount the USB drive, since it is always marked as "in use" by apache. Here's a removal procedure:
Close all open files and terminal sessions that are using the drive (the present-working-directory in terminal can cause unmount issues)
Stop apache: sudo /opt/local/apache2/bin/apachectl stop
umount /Users/username/ExtraVhosts
Then you can either unmount it graphically or manually (umount /Volumes/Cherrybomb).
If your work and home machines mount the drive to different locations, you could have multiple vhosts folders - home_vhost, work_vhost, etc - and use that in the binding step.
I hope this helps someone out :)
If you point Apache only at the mountpoint, there shouldn't be an issue. Just don't point Directory directives at directories within the drive.
E.g., if you mount /dev/somedisk /mnt/somevhost, the /mnt/somevhost directory will be there whether or not you have the drive mounted, and Apache will start. Apache doesn't care if the directory is empty, so a <Directory "/mnt/somevhost"> won't cause the server to fail to start if the drive isn't mounted.
Work with UNIX not against it :-p This solution should be sufficient for development.
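To illustrate, a vhost along these lines (names and paths are placeholders; Require all granted assumes Apache 2.4) lets Apache start whether or not the drive is mounted, and requests simply fail until it appears:
<VirtualHost *:80>
    ServerName dev.portable.local
    DocumentRoot /mnt/somevhost
    <Directory "/mnt/somevhost">
        Require all granted
    </Directory>
</VirtualHost>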