inotify with NFS - nfs

I've recently created a dropbox system using inotify, watching for files created in a particular directory. The directory I'm watching is mounted from an NFS server, and inotify is behaving differently than I'd expect. Consider the following scenario in which an inotify script is run on machine A, watching /some/nfs/dir/also/visible/to/B.
- Using machine A to create a file in /some/nfs/dir/also/visible/to/B, the script behaves as expected. Using machine B to carry out the same action, the script is not notified about the new file dropped in the directory.
- When the script is run on the NFS server itself, it gets notified when files are created from both machine A and machine B.
Is this a bug in the package I'm using to access inotify, or is this expected behaviour?

inotify requires support from the kernel to work. When an application watches a directory, it asks the kernel to inform it when changes occur there. When a change occurs, in addition to writing it to disk, the kernel also notifies the watching process.
When the change is made from a remote NFS client, it never passes through the watching machine's kernel; it happens entirely remotely. NFS predates inotify, and there is no network-level support for it in NFS, nor anything equivalent.
If you want to get around this, you can run a service on the storage server (since that kernel will always see changes to the filesystem) that brokers inotify requests for remote machines and forwards the events to the remote clients.
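A minimal sketch of that idea, assuming inotify-tools is installed on the NFS server and using placeholder names (nfs-server, /export/dropbox); the client asks the server's kernel to do the watching and streams the events back over SSH:
#!/bin/bash
# Run on a client (e.g. machine A): have the NFS server's kernel watch the
# exported directory and stream "file created" events back over SSH.
ssh nfs-server "inotifywait -m -r -e create --format '%w%f' /export/dropbox" |
while read -r newfile
do
    echo "new file on server: $newfile"
    # react here, e.g. process the corresponding file under the local NFS mount
done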
Edit: It seems odd to me that NFS should be blamed for its lack of support for inotify.
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. (Wikipedia article)
However:
Inotify (inode notify) is a Linux kernel subsystem that acts to extend filesystems to notice changes to the filesystem. [...] It has been included in the mainline Linux kernel from release 2.6.13 (June 18, 2005) [...]. (Wikipedia article)
It's hard to expect a portable network protocol/application to support a specific kernel feature developed for a different operating system that appeared more than twenty years later. Even if NFS did include extensions for it, they would not be available or useful on other operating systems.
*emphasis mine in all cases
Another problem with this: let's suppose we are not using a network at all, but rather a local filesystem with good inotify support, ext3 (suppose it's mounted at /mnt/foo). But instead of a real disk, the filesystem is mounted from a loopback device, and the underlying file is in turn accessible at a different location in the VFS (say, /var/images/foo.img).
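For concreteness, a setup like that could be created as follows (using the hypothetical paths from above):
# create a small ext3 image and loop-mount it at /mnt/foo
dd if=/dev/zero of=/var/images/foo.img bs=1M count=100
mkfs.ext3 -F /var/images/foo.img
mkdir -p /mnt/foo
mount -o loop -t ext3 /var/images/foo.img /mnt/foo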
Now, you're not supposed to modify mounted ext3 filesystems, but it's still reasonably safe to do so if the change is to file contents rather than metadata.
So suppose a clever user modifies the filesystem image (/var/images/foo.img) in a hex editor, replacing a file's contents with some other data, while at the same time an inotify watch is observing the same file on the mounted filesystem.
There's no reasonable way to arrange for inotify to always inform the watching process of this sort of change. Although there are probably some gyrations that could be taken to make ext3 notice and honor the change, none of that would apply to, say, the xfs driver, which is otherwise quite similar.
Nor should it. You're cheating! inotify can only inform you of changes that occurred through the VFS at the actual mountpoint being watched. If the changes occurred outside that VFS, because of a change to the underlying data, inotify can't help you and isn't designed to solve that problem.
Have you considered using a message queue for network notification?

To anyone who has come across this question while searching for why a Docker bind mount does not pick up file changes made in the host directory (for hot reloading of an app): it's because the propagation of file changes between host and container is not communicated to the container's kernel.
Only changes made from inside the container itself are communicated to the kernel. The solution is to have your live-reload utility turn on "polling mode" instead of using fsnotify.
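For example, if the live-reload tool happens to be nodemon, its --legacy-watch (-L) flag switches it from fsnotify-style watching to polling; a hypothetical bind-mount setup might look like this (image, paths and entry point are placeholders):
# run the app from a bind-mounted host directory, with nodemon polling for changes
docker run --rm -it -v "$(pwd)":/app -w /app node:18 npx nodemon --legacy-watch server.js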

I found SGI FAM, which uses a supervisor daemon to monitor file modifications. It supports NFS, and there is a description on Wikipedia.

I agree with SingleNegationElimination's explanation, and would like to add that iSCSI targets will work, since they alert the kernel.
So things on "real" file systems (relative to the watching system, that is) will trigger inotify alerts: for example, rsync'ing or net-catting something into a mounted partition.
If you have to get notifications via inotify (or have to use inotify), you can set up a cron job that rsyncs (e.g. rsync -avz) the remote directory over to a local filesystem. The drawback, of course, is that you use up real local disk space.
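A sketch of that cron workaround; it assumes SSH access to the machine that exports the share, and that your inotify watch points at the local copy (user@nfsserver and both paths are placeholders):
# crontab entry: every minute, mirror the remote directory into a local one,
# so that file creations arrive through the local kernel and trigger inotify
* * * * * rsync -avz user@nfsserver:/some/nfs/dir/ /local/dropbox/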

I second SingleNegationElimination's answer.
Also, you can try notify-forwarder.
Machine A watches for local inotify events, then forwards them to Machine B (via UDP).
Machine B doesn't (can't?) replay the events, but fires an ATTRIB event for the changed file.
If you use vagrant, use vagrant-notify-forwarder.

The problem with notify-forwarder is that it does not trigger an inotify event. It uses utime to update the timestamp of the file on the remote system, but inotify fails to see this.
AFAIK, the timestamp already gets updated when using an NFS mount. I have verified this myself between a Synology NAS NFS server and a Raspbian NFS mount (client).
Here's my solution / hack on the client:
#!/bin/bash
# Poll a directory tree once per second and report when anything changes.
# Works on NFS mounts because it relies on listing metadata, not on inotify.
path=$1

# Hash of the full recursive listing (names, sizes, timestamps, permissions).
firstmd5=$(ls -laR "$path" | md5sum | awk '{ print $1 }')

while true
do
    lastmd5=$(ls -laR "$path" | md5sum | awk '{ print $1 }')
    if [ "$firstmd5" != "$lastmd5" ]
    then
        firstmd5=$lastmd5
        echo "files changed"
    fi
    sleep 1
done
Granted, this doesn't report on the specific file being changed, but does provide a general notification hook that something's changed.
It's annoying / kludgy but if I needed more details I would do some additional hacking to isolate the actual files changed.
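One such extension, sketched under the same assumptions as the script above, would be to keep the previous directory listing and diff it against the current one, so the output names the entries that changed rather than just saying that something did:
#!/bin/bash
# Hypothetical variant: diff successive recursive listings to see what changed.
path=$1
prev=$(mktemp)
ls -laR "$path" > "$prev"
while true
do
    curr=$(mktemp)
    ls -laR "$path" > "$curr"
    if ! cmp -s "$prev" "$curr"
    then
        echo "files changed:"
        diff "$prev" "$curr" | grep '^[<>]'
    fi
    mv "$curr" "$prev"
    sleep 1
done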

Improved the script with a click action and an icon:
#!/bin/bash
# Poll a camera recording directory every 5 minutes; when a new file shows up,
# raise a desktop notification (via dunstify) with an "open" action that
# launches QtVsPlayer on the new recording.
DAT=$(date +%Y%m%d)
CAM="cam1"
CHEMIN=/mnt/cams/cam1/$DAT/

# Start with the most recent entry, if the day's directory already exists.
first="$CHEMIN"
if [ -d "$CHEMIN" ]; then
    first=$(ls -1rt "$CHEMIN" | tail -n 1)
fi
echo "$first"

while true
do
    if [ -d "$CHEMIN" ]; then
        last=$(ls -1rt "$CHEMIN" | tail -n 1)
        if [ "$first" != "$last" ]
        then
            first=$last
            echo "$last created"
            #notify-send -h string:desktop-entry:nautilus -c "transfer.complete" -u critical -i "$PWD/../QtVsPlayer.png" "$CAM $last"$'\n'"$CHEMIN"
            reply=$(dunstify -a QtVsPlayer -A 'open,ouvrir' -i "QtVsPlayer" "$CAM $last"$'\n'"$CHEMIN")
            if [[ "$reply" == "open" ]]; then
                QtVsPlayer -s "$CHEMIN$last"
            fi
        fi
    fi
    sleep 5m
done

Related

Way to pass parameters or share a directory/file to a qemu-kvm launched VM on CentOS 7.0

I need to be able to pass some parameters to my virtual machine during its boot so it sets itself up properly. To do that I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. There are only a few of these parameters, and if this were VMware we would just pass them as OVA params, and when the VM launches we would call the OVA environment to get them. But launching from qemu-kvm I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve the above problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read and use to do its setup, or set some kind of "environment variables" which I can query from within the VM to get these params and continue with the setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
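A hypothetical invocation of that approach (the kernel, initrd, disk image and MYPARAM value are placeholders); the appended string shows up in /proc/cmdline inside the guest:
qemu-system-x86_64 -m 2G -drive file=disk.qcow2,format=qcow2 \
    -kernel vmlinuz -initrd initrd.img \
    -append "root=/dev/vda1 console=ttyS0 MYPARAM=somevalue"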
Other possibilities could be a small prepared disk image (as you said) or via network/dhcp or a serial link into your guest or ... this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is write my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar straight to that file, then truncate it to a minimum size that is a 512-byte multiple, like 64K; otherwise the disk controller won't configure it. Then the VM is started with that file attached as a raw drive. After the VM has started, the temp file is deleted. From within the guest you can read/cat the raw block device and get the variable data (in BSD, use the c partition as the raw drive).
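A rough sketch of that flow, with hypothetical names throughout (the guest device name depends on the bus and on what other drives are attached):
# host: build the payload, pad it to a 512-byte multiple (e.g. 64K), then
# attach it to the guest as an extra raw drive
tar -cf /tmp/xxFoo ./config-data
truncate -s 64K /tmp/xxFoo
qemu-system-x86_64 <OPTIONS> -drive file=/tmp/xxFoo,format=raw,if=virtio

# guest (Linux): the payload appears as a raw block device, e.g. /dev/vdb
tar -xf /dev/vdb -C /run/config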
In Windows guests it's tricky to get at the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and it works like it does on Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder goes, I use Samba, which works with just about anything. I usually run several instances of smbd with different configurations.
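For reference, a minimal share definition of the kind that approach relies on might look like this in smb.conf (the share name and path are placeholders):
[vmshare]
    path = /srv/vmshare
    read only = no
    guest ok = yes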
One option is to create an ISO file and pass it as a parameter. This works with Windows and Ubuntu as either host or guest. You can read the mounted CD-ROM inside the guest OS.
>>qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
On a Linux guest (Ubuntu), mount the CD-ROM:
>>blkid                           # check that the media is there
>>sudo mkdir /mnt/cdrom
>>sudo mount /dev/sr0 /mnt/cdrom  # this step can also be put in crontab
>>cd /mnt/cdrom
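The ISO itself can be built on the host with genisoimage (or mkisofs); shown here for a Linux host, with the payload directory name as a placeholder:
>>genisoimage -o sample.iso -R -J ./vm-config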

Accessing external hard drive after logging into a remote machine using ssh command

I am doing an intensive computing project with a very old C program. The program requires a library called the Sun Performance Library, which is commercial software. Instead of purchasing the library myself, I am running the program by logging into a Solaris machine in our computer lab with the ssh command, while the working directory that stores the output data is still on my local Mac.
Now, a problem has occurred: the program uses a large amount of disk space to save some intermediate results, and the space on my local Mac is quickly filled (50 GB per user, prescribed by the administrator). These results are necessary for the next stage of computing and I cannot delete any of them before it finally produces the output data. Therefore, I have to move the working directory to an external hard drive in order to continue. Obviously,
cd /Volumes/VOLNAME
is not the correct way to do it because the remote machine will give me a prompt saying
/Volumes/VOLNAME: No such file or directory.
So, what is the correct way to do it?
sshfs recently added support for "slave mode", which allows you to do this. Assuming you have sshfs on Solaris (I'm not sure about this), the following command (run from your Mac) will do what you want:
dpipe /usr/lib/openssh/sftp-server = ssh SOLARISHOSTNAME sshfs MACHOSTNAME:/Volumes/VOLNAME MOUNTPOINT -o slave
This will result in the MOUNTPOINT directory on the server being mounted to your local external drive. Note that I'm not sure whether macOS has dpipe. If it doesn't, you can replace it with one of the equivalent solutions at How to make bidirectional pipe between two programs?. Also, if your SFTP server binary is somewhere else, substitute its path.
The common way to mount a remote volume in Solaris is via NFS, but that usually requires root permissions.
Another approach would be to make your application read its data from stdin and output its results to stdout, without using the file system directly. Then you could just redirect the data from/to your local machine through ssh. For instance:
ssh user@host </Volumes/VOLNAME/input.data >/Volumes/VOLNAME/output.data

Unison doesn't copy the SYSTEM permission (Cygwin/Windows)

After using CrashPlan for a while, I noticed that several files aren't being backed up. The files are synced via Unison (through Cygwin) with another PC, and while the *nix permissions are copied correctly, the mirrored file does not have SYSTEM as a user (in Windows). Therefore, CrashPlan can't back it up. Both client and server are running Cygwin.
What's the best solution? Can I copy this permission as well with unison? Can I do it with a script (in cygwin or cmd)?
Thanks
Sander
EDIT: To fix it short term I ran an icacls command, but I'm still looking for a way to copy the ACLs via unison whilst syncing.
Relevant section from the Unison Manual:
Permissions
Synchronizing the permission bits of files is slightly tricky when two different filesystems are involved (e.g., when synchronizing a Windows client and a Unix server). In detail, here's how it works:
When the permission bits of an existing file or directory are changed, the values of those bits that make sense on both operating systems will be propagated to the other replica. The other bits will not be changed.
When a newly created file is propagated to a remote replica, the permission bits that make sense in both operating systems are also propagated. The values of the other bits are set to default values (they are taken from the current umask, if the receiving host is a Unix system).
For security reasons, the Unix setuid and setgid bits are not propagated.
The Unix owner and group ids are not propagated. (What would this mean, in general?) All files are created with the owner and group of the server process.
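As a side note, the short-term icacls fix the asker mentions would be something along these lines, run from an elevated Cygwin or cmd prompt on the replica (the path is hypothetical); it re-grants SYSTEM full, inherited control over the synced tree:
icacls "D:\synced\share" /grant "SYSTEM:(OI)(CI)F" /T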

Rsnapshot without hard links?

I'm using Rsnapshot to backup all my servers on an EncFS encrypted partition. The partition has been created with the default paranoia mode offered by EncFS, thus it doesn't support hard links.
I'm able to run Rsnapshot the first time (creating daily.0, weekly.0, monthly.0) but not the second time.
Is there a way to use Rsnapshot without the hard-linking feature? I know it sounds a bit silly, but my rsnapshot.conf is very well configured and I don't want to either switch to other software or erase and recreate the EncFS volume.
Thank you
Look for this section in /etc/rsnapshot.conf file:
# If your version of rsync supports --link-dest, consider enabling it.
# This is the best way to support special files (FIFOs, etc) cross-platform.
# The default is 0 (off).
#
#link_dest 0
Make sure that link_dest is disabled. This option is used as a flag when the rsync command is called in the background. As per the rsync man page:
--link-dest=DIR hardlink to files in DIR when unchanged

Why is the minidlna database not being refreshed?

I am developing a MiniDLNA server to stream media over WiFi. Existing files are shown properly. However, when I add new files to media folders the changes are not updated across MiniDLNA clients. I have also tried to restart the server but it does not reflect the changes.
I changed inotify_interval = 60 but it's still not updating files.db which is the MiniDLNA media list database. If I delete this database and restart the server it shows the changes.
Does anyone know what the problem might be?
$ minidlnad -h
…
-r forces a rescan
-R forces a rebuild
In summary, the most reliable way to have MiniDLNA rescan all media files is by issuing the following set of commands:
$ sudo minidlnad -R
$ sudo service minidlna restart
Client-side script to rescan server
However, MiniDLNA will often be running on a remote server. Here is a client-side script that requests a rescan on such a server:
#!/usr/bin/env bash
ssh -t server.on.lan 'sudo minidlnad -R && sudo service minidlna restart'
AzP already provided most of the information, but some of it is incorrect.
First of all, there is no such option as inotify_interval. The only option that exists is notify_interval, and it has nothing to do with inotify.
So, to clarify: notify_interval controls how frequently the (mini)dlna server announces itself on the network. The default value of 895 means it will announce itself about once every 15 minutes, meaning clients will need at most 15 minutes to find the server. I personally use 1-5 minutes, depending on client volatility in the network.
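For example, to announce every five minutes, in /etc/minidlna.conf:
# SSDP announcement interval, in seconds (default 895)
notify_interval=300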
In terms of getting minidlna to find files that have been added, there are two options:
The first is equivalent to removing the files.db file: restart minidlna with the -R argument, which forces a rebuild of the database from scratch. Since version 1.2.0 there is also the -r argument, which performs a rescan instead; it preserves the existing database, dropping stale records and adding new ones.
The second is to rely on inotify events by setting inotify=yes and restarting minidlna. If inotify is set to no, the only way to update the file database is a forced full rescan.
Additionally, in order to have inotify working, the file-system must support inotify events, which is not the case in most remote file-systems. If you have minidlna running over NFS it will not see any inotify events because these are generated on the server side and not on the client.
Finally, even if inotify is working and is supported by the file-system, the user under which minidlna is running must be able to read the file, otherwise it will not be able to retrieve necessary metadata. In this case, the logfile (usually /var/log/minidlna.log) should contain useful information.
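A quick way to check both points, assuming the daemon runs as the user minidlna (the user name, media path and log path vary by distribution):
# can the minidlna user actually read the file?
sudo -u minidlna ls -l /path/to/media/newfile.mkv
# watch the log for scan or permission errors
sudo tail -f /var/log/minidlna.log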
MiniDLNA uses inotify, which is a functionality within the Linux kernel, used to discover changes in specific files and directories on the file system. To get it to work, you need inotify support enabled in your kernel.
The notify_interval option (notice the lack of a leading 'i') is, as far as I can tell, only used if you have inotify disabled. To use notify_interval (i.e. to have the server 'poll' the file system for changes instead of being notified of them automatically), you have to disable the inotify functionality.
This is how it looks in my /etc/minidlna.conf:
# set this to no to disable inotify monitoring to automatically discover new files
# note: the default is yes
inotify=yes
Make sure that inotify is enabled in your kernel.
If it's not enabled, and you don't want to enable it, a forced rescan is the way to force MiniDLNA to re-scan the drive.
I recently discovered that minidlna doesn't update the database if the media file is a hard link. If you want these files to show up in the database, a full rescan is necessary.
For example: if you have a file /home/movies/foo.mkv and a hard link to it at /home/minidlna/video/foo.mkv, where /home/minidlna is your minidlna share, you will have to do a rescan before that file appears in the database (and subsequently on your DLNA client).
I'm still trying to find a way around this. If anyone has any input, it's most welcome.
There is a patch for the minidlna source code available on SourceForge that does not do a full rescan but a kind of incremental scan. It worked fine, but with some later version the patch broke. See here: Link to SF
Regards
Gerry
I solved it with a small script:
Every 15 seconds it checks the size of the directory (/media/seriesPI) and restarts the service if anything has changed.
#!/bin/bash
# Concatenate the sizes reported by `du` for every directory under
# /media/seriesPI into one string; a change in any directory changes the string.
function sizeFiles(){
    cad=''
    for i in $(du /media/seriesPI/ | awk '{print $1}')
    do
        cad+=$i
    done
}

# initial snapshot
sizeFiles
first=$cad

while true
do
    sizeFiles
    echo "$first != $cad"
    if [ "$first" != "$cad" ] ; then
        echo "Directory size has changed!"
        echo "Restart service MiniDLNA"
        sudo service minidlna restart
        # remember the new size
        first=$cad
    else
        echo "There are no changes in the directory"
    fi
    echo "waiting 15 seconds..."
    sleep 15
done
Resolved it with a root crontab entry:
10 * * * * /usr/bin/minidlnad -r