Unable to mount NFS share using autofs on Solaris 10 - nfs

I am trying to mount an NFS share on a Solaris 10 machine at boot, without any luck so far.
The NFS share is accessible and mounts without any problems if I do so manually from the command line (mount -F nfs server_hostname:/exported_dir_path/ /mnt/tmpdir), but I don't know how to tell autofs to mount it at boot.
We have another machine on the same network (also Solaris) that has this working, but I can't figure out how it is configured differently from the non-working one.
I googled the problem and found that the /etc/vfstab, /etc/auto_master, and /etc/hosts files need to have proper entries to make this work. I compared these files from the non-working machine with the ones on the working machine but did not notice any differences.
Could someone please guide me to properly configure autofs to mount NFS shares on a Solaris 10 machine?
Thanks,
Aashish.
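For reference, a direct autofs map on Solaris 10 usually needs entries along these lines; the paths below reuse the ones from the manual mount command, and the exact options are assumptions:
# /etc/auto_master - reference a direct map (this line may already be present)
/-    auto_direct
# /etc/auto_direct - map the local mount point to the NFS export
/mnt/tmpdir    -rw    server_hostname:/exported_dir_path
# re-read the maps without a reboot
automount -v
Alternatively, a plain mount-at-boot entry (no autofs involved) would go in /etc/vfstab, something like:
server_hostname:/exported_dir_path  -  /mnt/tmpdir  nfs  -  yes  rw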

Related

Too many getattr requests for file info when using nfs

I'm sharing a folder using nfs on a Linux system. It's very slow, and I found that there are many getattr requests.
After I mount my shared nfs folder on another machine, I'm trying to copy the files out. It's very slow. What I found via tcpdump is lots of getattr calls happening. Most of the requests are on the same file, which I don't understand either.
This is the mount command I'm using:
mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,actimeo=0,nolock,tcp
content of /etc/exports:
/dept/nfs *(no_subtree_check,rw,async)
What is the possible reason why there are so many getattr calls?
Is there any configuration problem?
It's because of the mount option actimeo=0. That disables attribute caching on the client, so every attribute lookup results in a fresh getattr request to the server instead of being answered from the cache.
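For example, dropping actimeo=0 so that the default attribute caching applies (the server name and mount point here are placeholders; the remaining options are the ones from the question):
mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,nolock,tcp nfsserver:/dept/nfs /mnt/dept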

How to run a graphical tool (e.g. deja-dup) as root on Ubuntu 17.10

When trying to set up automated backups under Ubuntu 17.10 using Deja Dup, I realized that one cannot back up the root directory, since a normal user starting the deja-dup application does not have the rights to access all files in /.
(A German discussion of a rather similar situation can be found here: https://forum.ubuntuusers.de/topic/wie-sichert-an-mit-deja-dup-ein-systemverzeich/)
The usual workaround of running the deja-dup application through gksu no longer works on Ubuntu 17.10. It seems a deliberate decision was made to prevent users from starting graphical applications as root, since that is often a bad/risky thing to do.
However, to create regular backups of a system's / directory with deja-dup, the application has to be configured and later on started as root.
Since the typical approaches like gksu, gksudo, and sudo -H do not work under Ubuntu 17.10, I would highly appreciate any advice on a secure way to run deja-dup as root. Can someone help?
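Not an authoritative answer, but one workaround that is often suggested for 17.10 (assuming the application can run over X11/XWayland) is to explicitly allow the local root user to talk to your display and then start the program with sudo:
xhost +si:localuser:root    # allow local root to connect to the current display
sudo -H deja-dup            # run the GUI as root
xhost -si:localuser:root    # revoke the grant again afterwards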

Docker for Win acme.json permissions

Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder routed to a host volume which contains the file acme.json. With the Traefik 1.3.1 update, I noticed that Traefik gets stuck in an infinite loop complaining that the "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, if I need to restart the container, I have to remove acme.json again or I hit the same issue!
My guess is that the issue lies with the Windows volume mapped to Docker but I was wondering what the recommended workaround would even be for this?
Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point Docker for Windows does not let you control (chmod) the Unix-style permissions on shared volumes for deployed containers; it sets permissions to a fixed default of 0755 (read, write, and execute for the owner; read and execute for group and others), which is not configurable.
Traefik is not compatible with regular Windows due to the POSIX permissions check. It may work in the Windows Subsystem for Linux since that has a Unix-style permission system.
Stumbled across this issue when trying to get Traefik running on Docker for Windows, and ended up getting it working by adding a few lines to a Dockerfile that create acme.json and set its permissions. I then built the image, and although it threw the "Docker image from Windows against a non-Windows Docker host" security warning, the permissions on the acme.json file were correct and it worked!
I set up a repo and have it auto-building on Docker Hub here for further testing:
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
Once I got that built I switched the image out in my docker-compose file and my DNS challenge to Cloudflare worked like a charm according to the logs.
I hope this helps someone!
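For reference, a minimal sketch of the kind of Dockerfile described above (the base image tag is an assumption, and this is not copied from the linked repo):
FROM traefik:1.3.1-alpine
# Create the acme store inside the image instead of bind-mounting it from the
# Windows host, and give it the 600 mode Traefik insists on.
RUN mkdir -p /etc/traefik/acme \
 && touch /etc/traefik/acme/acme.json \
 && chmod 600 /etc/traefik/acme/acme.json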

Is there a way to check if a directory exists in Apache configuration files?

Is there a way to include configuration settings in Apache based on whether a directory exists? Basically, I have a portable hard drive that I transport between work and home and that holds some stuff I'm developing. I only want the Apache config to load a particular virtual host if the folder exists.
Since Apache 2.4.34 you can use <IfFile>...</IfFile>, which checks whether a file exists. There are more details on the <IfFile> documentation page.
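A hypothetical snippet (paths invented for illustration): wrap the Include in an <IfFile> check so the vhost is only pulled in when its config file on the portable drive is present:
<IfFile "/Volumes/PortableDrive/vhosts/dev-vhost.conf">
    Include "/Volumes/PortableDrive/vhosts/dev-vhost.conf"
</IfFile>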
No, there seems to be no direct way to do this.
The only thing that might be a solution is the IfDefine directive. You can set a define with the -D parameter when the server is started.
The parameter-name argument is a define as given on the httpd command line via -Dparameter, at the time the server was started.
You might be able to check for the existence of a directory in a batch or bash file, and set the -D parameter accordingly.
Whether that is an option will depend on how your server is started from the portable hard drive.
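A rough sketch of that approach (the define name and all paths are made up): the start script tests for the directory and passes -D only when it is present, and httpd.conf guards the Include with <IfDefine>:
# start script - pass a define only when the portable drive's folder exists
if [ -d /Volumes/PortableDrive/vhosts ]; then
    apachectl -D PORTABLE_VHOSTS -k start
else
    apachectl -k start
fi
# httpd.conf
<IfDefine PORTABLE_VHOSTS>
    Include /Volumes/PortableDrive/vhosts/*.conf
</IfDefine>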
I've come up with a solution that seems to work for Linux and OS X, and it hinges on "mountpoints". It might be possible to emulate it within Windows, as well, but you would probably have to get creative with FUSE and/or Cygwin.
If you create an empty folder in your home directory, such as "/Users/username/ExtraVhosts", you can add an apache directive to "Include /Users/username/ExtraVhosts/*".
Then, when you insert your thumb drive, you can mount it somewhere and then use mountpoint "binding" to cross-link the ExtraVhosts folder to a folder on the mobile device.
An OS X example:
I have a thumb drive called 'Cherrybomb'
When I insert it, it always gets mounted to /Volumes/Cherrybomb
I can then use bindfs (sudo port install bindfs) to mount a subfolder of it, like so:
sudo bindfs /Volumes/Cherrybomb/Projects/vhosts /Users/username/ExtraVhosts
Then I can restart apache to read in the updated configuration:
sudo /opt/local/apache2/bin/apachectl restart
At that point, it's just a matter of adding entries in /etc/hosts for server aliases to get picked up.
The Linux equivalent would be to use the "--bind" parameter of the mount command.
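Something along these lines, assuming the drive shows up under /media (paths are illustrative):
sudo mount --bind /media/username/Cherrybomb/Projects/vhosts /home/username/ExtraVhosts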
One caveat: This makes it difficult to quickly unmount the USB drive, since it is always marked as "in use" by apache. Here's a removal procedure:
Close all open files and terminal sessions that are using the drive (the present-working-directory in terminal can cause unmount issues)
Stop apache: sudo /opt/local/apache2/bin/apachectl stop
umount /Users/username/ExtraVhosts
Then you can either unmount it graphically or manually (umount /Volumes/Cherrybomb).
If your work and home machines mount the drive to different locations, you could have multiple vhosts folders - home_vhost, work_vhost, etc - and use that in the binding step.
I hope this helps someone out :)
If you point Apache at the mountpoint only, there shouldn't be an issue. Just don't point Directory directives at directories within the drive.
E.g., if you mount /dev/somedisk /mnt/somevhost, the /mnt/somevhost directory will be there whether or not you have the drive mounted, and Apache will start. Apache doesn't care if the directory is empty, so a <Directory "/mnt/somevhost"> block won't cause the server to fail to start if the drive isn't mounted.
Work with UNIX, not against it :-p This solution should be sufficient for development.
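In other words, a vhost like the hypothetical one below keeps working (it just serves an empty directory) even when nothing is mounted at /mnt/somevhost (2.4-style access control assumed):
<VirtualHost *:80>
    ServerName dev.example.test
    DocumentRoot /mnt/somevhost
    <Directory "/mnt/somevhost">
        Require all granted
    </Directory>
</VirtualHost>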

Recovering Apache from a mounted, unavailable NFS Mount

I have several web applications in production that utilize NFS mounts to share resources (usually static asset files) among web heads. In the event that an NFS mount becomes unavailable, Apache will hang requesting files that cannot be accessed, the kernel will log:
Nov 2 14:21:20 server2 kernel: nfs: server server1 not responding, still trying
I reproduced the behavior in RHEL5 running NFS v3 and Apache 2.2.3:
Create an NFS Mount on Server1 (contents of my /etc/exports)
/srv/test_share server2(rw)
Mount the NFS share on Server2 (contents of my /etc/fstab)
server1:/srv/test_share /mnt/test_share nfs defaults 0 0
Set up a virtual host in Apache with a simple HTML file referencing image files stored on the NFS share
Load the site; the HTML and image files all return 200
Unmount the NFS share; loading the page returns 404s for the referenced images
Remount the NFS Share
Simulate an NFS crash by turning NFS off on Server1 - reloading the site hangs retrieving the referenced files.
Internet searches so far have not turned up a good solution. Basically the desired behavior would be for the web server to return 404s and not hang until the NFS mount recovers.
Cheers,
Ben
A couple of options:
Get your NFS mount options right: you need to do a soft mount so NFS access can be interrupted. Try soft,intr,timeo=10 instead of the defaults (see the fstab line sketched after this list).
Sync your document roots with something else like rsync, or script yourself a semi-automatic checkout/export from your SCM, if you use one. SCM use is recommended anyway; it gives you the possibility to revert to the last working version, for instance.
Use a real distributed filesystem (preferably a fault-tolerant one like Coda) or even a distributed block device system like DRBD.
Options 2 and 3 give you disconnected operation and are therefore much more robust than NFS. DRBD is sexy, but my advice would be option 2 with something like git or svn: simple and robust.
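As a concrete sketch, the fstab line from the reproduction above would become something like this (timeo=10 is the value suggested; the rest is unchanged):
server1:/srv/test_share /mnt/test_share nfs soft,intr,timeo=10 0 0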
I would not directly serve from the NFS mount, but instead from your local filesystem.
It wouldn't be too hard to set up a cron job that syncs the NFS mount to the local file system every few minutes. Apache would serve its content from there, not depending on the NFS mount. If the mount goes down, Apache would still be able to serve the assets, although they might be out of date until the NFS mount comes back up.
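A hypothetical crontab entry along those lines (the local target path is made up):
*/5 * * * * rsync -a --delete /mnt/test_share/ /var/www/static-assets/
Apache's DocumentRoot (or an Alias for the assets) would then point at /var/www/static-assets instead of the NFS mount.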