I have a problem. I created a pool consisting of a single volume backed by one 2.5 TB file, just to deal with duplicate files. I copied a folder with photos into it. Some of the photos were not backed up anywhere else. Just now I saw that my pool folder was empty, and when I checked with 'sudo zfs list' it said 'no datasets available'.
I thought the pool had been detached, and to attach it again I re-ran all of these commands.
sudo zpool create singlepool -f /home/john/zfsvolumes/zfs_single_volume.dat -m /home/share/zfssinglepool
sudo zfs set dedup=on singlepool
sudo zpool get dedupratio singlepool
sudo zfs set compression=lz4 singlepool
sudo chown -R writer:writer /home/share/zfssinglepool
Now I see an empty pool!
Can I get back the folders I copied to the pool before I re-created it?
Unfortunately, use of zpool create -f will recreate the pool from scratch even if ZFS recognizes that a pool has already been created using that storage:
-f      Forces use of vdevs, even if they appear in use or specify a
        conflicting replication level. Not all devices can be overridden
        in this manner.
This is similar to reformatting a partition with another file system: whatever data was there remains written in place, but the references the file system needs to find that data are erased. You may be able to pay an expert to reconstruct your data, but otherwise I'm afraid the data will be very hard to get back from your pool. As in any data recovery mission, I'd advise making a copy of the data ASAP onto some external media that you can do the recovery from, in case further attempts at recovery accidentally corrupt the data even worse.
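For future reference (this is general zpool usage, not a way to recover the data already overwritten): before forcing a create, you can ask ZFS whether it already sees a pool on that storage and import it instead. A minimal sketch using the pool and file names from the question:
sudo zpool import -d /home/john/zfsvolumes                                # scan the directory for pools that exist but are not imported
sudo zpool import -d /home/john/zfsvolumes singlepool                     # if the old pool shows up, import it by name instead of re-creating it
sudo zpool create singlepool /home/john/zfsvolumes/zfs_single_volume.dat  # without -f, create refuses storage that looks in use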
Setup:
1. OpenERP/Odoo installed in a Docker environment as a single unit; in other words, OpenERP/Odoo and a PostgreSQL database are installed by running a single "run" command.
2. NGINX used as a reverse proxy.
3. Restoring a database over 1 MB in size.
Reference:
Error message in restoring database via both zip file and dump file for Odoo 8
Symptoms:
OpenERP/Odoo starts to upload the database but then states that the database cannot be restored, while at the same time advising that the database has been restored.
The database is not available at the central OpenERP/Odoo log-in screen.
For newbies like myself, the experience of this problem was particularly frustrating. The problem stems from a default setting within NGINX that limits what a client (the computer used to restore the database to OpenERP/Odoo) can upload to 1 MB. As a result, the database restoration feature of OpenERP/Odoo appears broken. Thankfully, the reference in the question above hinted at the problem and the solution. Included below is a more richly documented set of instructions for correcting the NGINX configuration that prevents OpenERP/Odoo database restoration.
Attach to Docker container
$ docker exec -it [containerIdOrName] bash
If this is the first time modifying NGINX inside the container, install vi
$ apt-get update
$ apt-get install vim
Set client_max_body_size to 0 to disable body size checking
See Module ngx_http_core_module for more information on settings
$ vi /etc/nginx/nginx.conf
http {
    ...
    client_max_body_size 0;
}
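If disabling the check entirely feels too permissive, a finite limit comfortably larger than your database dump also works; the 200m below is only an illustrative value, not something taken from the Odoo or NGINX documentation:
http {
    ...
    client_max_body_size 200m;
}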
Exit out of NGINX container
$ exit
Restart NGINX container
$ docker restart [containerIdOrName]
Give database restoration a try.
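As a lighter-weight alternative to restarting the whole container, you can also check and reload the configuration in place from inside the container (a sketch using the standard NGINX commands):
$ nginx -t        # test the edited configuration for syntax errors
$ nginx -s reload # if the test passes, reload NGINX without restarting the container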
Please post corrections or additions to this method to sweeten the process for others who are struggling to bushwhack their way through virtualization.
I'm using lsyncd to synchronize the web root of two separate CentOS 7 servers running Apache. So far it seems to be running decently, but every so often I notice that lsyncd doesn't process the files properly.
An example of the issue I'm having: if I have a file called hello-world.txt on server01 and server02, and I delete it from server01, then when lsyncd runs, instead of deleting it from server02 it actually re-creates it on server01.
I haven't found anything online about this, and I'm new to lsyncd, so I'm not quite sure how to go about fixing it.
Not sure it's needed, but here's the lsyncd configuration file (/etc/lsyncd.conf):
settings {
    logfile = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd.stat",
    statusInterval = 2
}

sync {
    default.rsync,
    source = "/var/www/",
    target = "192.168.1.36:/var/www/",
    rsync = {
        rsh = "/usr/bin/ssh -l lsync -i /etc/lsync/.ssh/id_rsa",
    }
}
Any help would be appreciated! Thanks!
lsyncd does not do bidirectional synchronization; its purpose is to make that directory look like this directory, continuously.
You could achieve the same effect by running rsync as a cron job. The only difference is that lsyncd is more responsive when files are changed, and more efficient when files are idle.
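For comparison, the cron-based equivalent of the one-way sync in the question might look roughly like this (a sketch reusing the paths and SSH identity from your config; the five-minute interval is arbitrary):
*/5 * * * * rsync -az --delete -e "ssh -l lsync -i /etc/lsync/.ssh/id_rsa" /var/www/ 192.168.1.36:/var/www/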
With extreme care, you could set up lsyncd on both servers, syncing in both directions, and then you might get what you want, but that relies on updates not being too rapid (when server01 sends an update to server02, server02 will detect that change and attempt to send it right back to server01, which is harmless as long as the file has not changed again in the meantime). I wouldn't recommend this setup; use SyncThing, or only make edits on the "master" server.
I'm using rsnapshot to back up all my servers onto an EncFS-encrypted partition. The partition was created with the default paranoia mode offered by EncFS, so it doesn't support hard links.
I'm able to run rsnapshot the first time (creating daily.0, weekly.0, monthly.0) but not the second time.
Is there a way to use rsnapshot without the hard-linking feature? I know it sounds a bit silly, but my rsnapshot.conf is very well configured and I don't want to switch to other software or to erase and recreate the EncFS volume.
Thank you
Look for this section in the /etc/rsnapshot.conf file:
# If your version of rsync supports --link-dest, consider enabling it.
# This is the best way to support special files (FIFOs, etc) cross-platform.
# The default is 0 (off).
#
#link_dest 0
Make sure the "link_dest" is disabled. This is used as a flag when rsync command is called in the background. As per the man page for rsync:
--link-dest=DIR hardlink to files in DIR when unchanged
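To verify the effect of the change, rsnapshot's test mode prints the commands a run would execute without actually running them, so you can confirm that --link-dest is no longer being passed (a sketch; "daily" is the interval from the question):
rsnapshot -t daily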
I've recently created a dropbox system using inotify, watching for files created in a particular directory. The directory I'm watching is mounted from an NFS server, and inotify is behaving differently than I'd expect. Consider the following scenario in which an inotify script is run on machine A, watching /some/nfs/dir/also/visible/to/B.
- Using machine A to create a file in /some/nfs/dir/also/visible/to/B, the script behaves as expected. Using machine B to carry out the same action, the script is not notified about the new file dropped in the directory.
- When the script is run on the NFS server itself, it gets notified when files are created from both machine A and machine B.
Is this a bug in the package I'm using to access inotify, or is this expected behaviour?
inotify requires support from the kernel to work. When an application watches a directory, it asks the kernel to inform it when changes occur there. When a change occurs, in addition to writing it to disk, the kernel also notifies the watching process.
On a remote NFS machine, the change is not visible to the kernel; it happens entirely remotely. NFS predates inotify, and there is no network-level support for it in NFS, or anything equivalent.
If you want to get around this, you can run a service on the storage server (since that kernel will always see changes to the filesystem) that brokers inotify requests for remote machines and forwards the data to the remote clients.
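A minimal sketch of that broker idea, assuming inotify-tools is installed on the NFS server and machine A is reachable over SSH (the hostname, events, and paths are illustrative):
# On the NFS server: watch the exported tree and stream one line per event to a client
inotifywait -m -r -e create,delete,modify --format '%e %w%f' /some/nfs/dir \
  | ssh machineA 'cat >> /tmp/nfs-events.log'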
Edit: It seems odd to me that NFS should be blamed for its lack of support for inotify.
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984, wikipedia article
However:
Inotify (inode notify) is a Linux kernel subsystem that acts to extend filesystems to notice changes to the filesystem. [...] It has been included in the mainline Linux kernel from release 2.6.13 (June 18, 2005 ) [...]. wikipedia article
It's hard to expect a portable network protocol/application to support a specific kernel feature that was developed for a different operating system and appeared more than twenty years later. Even if it did include extensions for it, they would not be available or useful on other operating systems.
*emphasis mine in all cases
Another problem with this: let's suppose we are not using a network at all, but rather a local filesystem with good inotify support, ext3 (suppose it's mounted at /mnt/foo). But instead of a real disk, the filesystem is mounted from a loopback device, and the underlying image file is in turn accessible at a different location in the VFS (say, /var/images/foo.img).
Now, you're not supposed to modify mounted ext3 filesystems, but it's still reasonably safe to do so if the change is to file contents rather than metadata.
So suppose a clever user modifies the filesystem image (/var/images/foo.img) in a hex editor, replacing a file's contents with some other data, while at the same time an inotify watch is observing the same file on the mounted filesystem.
There's no reasonable way to arrange for inotify to always inform the watching process of this sort of change. Although there are probably some gyrations that could be taken to make ext3 notice and honor the change, none of that would apply to, say, the xfs driver, which is otherwise quite similar.
Nor should it. You're cheating! inotify can only inform you of changes that occurred through the VFS at the actual mountpoint being watched. If the changes occurred outside that VFS, because of a change to the underlying data, inotify can't help you and isn't designed to solve that problem.
Have you considered using a message queue for network notification?
To anyone who has come across this question while searching for an answer to why a bind mount in Docker does not detect file changes from the host directory (for hot reloading of an app): it's because the propagation of file changes between host and container is not communicated to the container's kernel.
Only changes from the container itself are communicated to the kernel. The solution for this is to have your live-reload utility turn on "polling mode" instead of using fsnotify.
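What enabling "polling mode" looks like depends on the watcher you happen to be using; two common examples (these are assumptions about the tooling, not something stated above):
nodemon --legacy-watch app.js        # nodemon: poll the filesystem instead of relying on fsnotify events
CHOKIDAR_USEPOLLING=true npm start   # chokidar-based watchers (webpack-dev-server, create-react-app, etc.)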
I found SGI FAM, which uses a supervisor daemon to monitor file modifications. It supports NFS, and there is a description of it on the wiki.
I agree with SingleNegationElimination's explanation, and would like to add that iSCSI targets will work, since they alert the kernel.
So things on "real" file systems (relative to the system, that is) will trigger Inotify to alert. Like Rsync'ing, net-catting something into a mounted partition.
If you have to get notifications via inotify (or have to use inotify) you can make a cron to rsync -avz over to the file system. Drawbacks of course are that you are using real system hdd space.
I second #SingleNegationElimination.
Also, you can try notify-forwarder.
Machine A watches for local inotify events, then forwards them to Machine B (via UDP).
Machine B doesn't (can't?) replay the events, but fires an ATTRIB event for the changed file.
If you use vagrant, use vagrant-notify-forwarder.
The problem with notify-forwarder is that it does not trigger an inotify event. It uses utime to update the timestamp for the file on the remote system, but inotify fails to see this.
AFAIK, the timestamp already gets updated when using an NFS mount. I have verified this myself between a Synology NAS NFS server and a Raspbian NFS mount (client).
Here's my solution / hack on the client:
#!/bin/bash
# Poll a directory tree and report when anything changes, by comparing
# checksums of a recursive listing. $1 is the path to watch.
path=$1

firstmd5=$(ls -laR "$path" | md5sum | awk '{ print $1 }')

while true
do
    lastmd5=$(ls -laR "$path" | md5sum | awk '{ print $1 }')
    if [ "$firstmd5" != "$lastmd5" ]
    then
        firstmd5=$lastmd5
        echo "files changed"
    fi
    sleep 1
done
Granted, this doesn't report on the specific file being changed, but does provide a general notification hook that something's changed.
It's annoying / kludgy but if I needed more details I would do some additional hacking to isolate the actual files changed.
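One low-effort way to isolate them, building on the same script (a sketch under the same assumptions), is to keep the previous listing and diff it against the fresh one instead of only comparing checksums:
prev=$(ls -laR "$path")
while true
do
    curr=$(ls -laR "$path")
    if [ "$curr" != "$prev" ]
    then
        diff <(echo "$prev") <(echo "$curr")   # shows the entries that changed
        prev=$curr
    fi
    sleep 1
done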
Improved the script with a click action and an icon:
#!/bin/bash
# Watch a camera recording directory and pop up a clickable notification
# (via dunstify) whenever a new file appears; checks every 5 minutes.
DAT=$(date +%Y%m%d)
CAM="cam1 "
CHEMIN=/mnt/cams/cam1/$DAT/

first="$CHEMIN"
if [ -d "$CHEMIN" ]; then
    first=$(ls -1rt "$CHEMIN" | tail -n 1)
fi
echo "$first"

while true
do
    if [ -d "$CHEMIN" ]; then
        last=$(ls -1rt "$CHEMIN" | tail -n 1)
        if [ "$first" != "$last" ]
        then
            first=$last
            echo "$last created"
            #notify-send -h string:desktop-entry:nautilus -c "transfer.complete" -u critical -i $PWD../QtVsPlayer.png $CAM $last"\n\r"$CHEMIN
            # Show a notification with an "open" action; clicking it plays the new file
            reply=$(dunstify -a QtVsPlayer -A 'open,ouvrir' -i "QtVsPlayer" "$CAM $last"$'\n'"$CHEMIN")
            if [[ "$reply" == "open" ]]; then
                QtVsPlayer -s "$CHEMIN$last"
            fi
        fi
    fi
    sleep 5m
done