Rsnapshot without hard links? - backup

I'm using Rsnapshot to back up all my servers to an EncFS-encrypted partition. The partition was created with the default paranoia mode offered by EncFS, so it doesn't support hard links.
I'm able to run Rsnapshot the first time (creating daily.0, weekly.0, monthly.0) but not the second time.
Is there a way to use Rsnapshot without the hard-linking feature? I know it sounds a bit silly, but my rsnapshot.conf is very well configured and I don't want to either switch to another piece of software or erase and recreate the EncFS volume.
Thank you

Look for this section in /etc/rsnapshot.conf file:
# If your version of rsync supports --link-dest, consider enabling this.
# This is the best way to support special files (FIFOs, etc) cross-platform.
# The default is 0 (off).
#
#link_dest 0
Make sure the "link_dest" is disabled. This is used as a flag when rsync command is called in the background. As per the man page for rsync:
--link-dest=DIR hardlink to files in DIR when unchanged
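For reference, this is roughly how the relevant line looks with the rsync-based hard-link cloning turned off (a minimal sketch; in rsnapshot.conf the parameter and value must be separated by a tab, and you can check the edited file afterwards with rsnapshot configtest):
# in /etc/rsnapshot.conf (tab-separated)
link_dest	0
# then verify the syntax:
rsnapshot configtest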

Related

S3Fs directory listings slow - caching somehow possible?

I'm wondering if there is any way to practically speed up directory listings on an s3fs mount. I have a WebDAV server, used only for read operations, that basically serves files from my s3fs mount. The problem is that listing directories is slow, while transfer speed is fine.
So I started to look around the web and stumbled across "JuiceFS"; sadly this was not an option for several reasons. Then I tried "vmtouch" to keep the mounted S3 storage in local memory, but this doesn't work either, as the mount is a shared resource managed by the FUSE kernel module.
Even using the s3fs built-in cache does not solve the issue; instead it makes things worse, as each file is first downloaded from S3 into the local cache and then served via WebDAV ...
Is there no way to just speed up directory listings on S3? That is all I need in the end, not a fancy POSIX-compatible block device like JuiceFS, which creates its own logic on top of your S3 bucket ... not what I was searching for.
Unfortunately s3fs 1.91 has poor readdir performance. There are a few open issues and pull requests that track future improvements:
Option to not use head requests
Consider changing -o notsup_compat_dir default
Consider changing -o noobj_cache default
Increase -o multireq_max
Issue parallel requests in get_object_attribute
You can toggle #2-4 via command-line flags today, but #5 is still in progress. #1 is the big win that would give a 100x speedup, but it trades off POSIX compatibility, e.g., no UID/GID and no permissions. One alternative you can try today is goofys, which implements #1.
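As a rough sketch, the flags mentioned above are passed at mount time along these lines (the bucket name and mount point are placeholders, and the exact option names and availability depend on your s3fs version, so double-check them against your man page):
s3fs mybucket /mnt/mybucket -o notsup_compat_dir -o enable_noobj_cache -o multireq_max=30
As an alternative that avoids the per-object HEAD requests entirely:
goofys mybucket /mnt/mybucket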

Host Disk Usage: Warning message regarding disk usage

I've downloaded version HDF_3.0.2.0_vmware of the Hortonworks Sandbox. I am using VMWare Player version 6.0.7 on my laptop. Shortly after startup/logging into Ambari, I see this alert:
The message that is cut off reads: "Capacity Used: [60.11%, 32.3 GB], Capacity Total: [53.7 GB], path=/usr/hdp". I'd hoped that I would be able to focus on NiFi/Storm development rather than administering the sandbox itself; however, it looks like the VM is undersized. Here are the VM settings I have for storage. How do I go about correcting the underlying issue prompting the alert?
I had a similar issue; it's about node partitioning and the directories mounted for data under HDFS -> Configs -> Settings -> DataNode.
You can check your node partitioning using the command below:
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
Often the HDFS NameNode or DataNode directories point to the root partition. You can change the alert threshold values temporarily, but for a permanent solution you can add additional data directories.
The links below can be helpful for doing this:
https://community.hortonworks.com/questions/21212/configure-storage-capacity-of-hadoop-cluster.html
Checking from the link above, I think your partitioning is wrong: you are not using "/" for the HDFS directory. If you want to use the full disk capacity, you can create any folder under "/", for example /data/1, on every DataNode using the command "# mkdir -p /data/1", add it to dfs.datanode.data.dir, and restart the HDFS service (a sketch of these steps follows the links below).
https://hadooptips.wordpress.com/2015/10/16/fixing-ambari-agent-disk-usage-alert-critical/
https://community.hortonworks.com/questions/21687/how-to-increase-the-capacity-of-hdfs.html
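As a rough sketch of those steps (the /data/1 path and the hdfs:hadoop ownership are assumptions for a typical HDP install; adjust them to your environment):
# run on every DataNode
mkdir -p /data/1
chown -R hdfs:hadoop /data/1
# then append /data/1 to dfs.datanode.data.dir under
# HDFS -> Configs -> Settings -> DataNode directories in Ambari
# and restart the HDFS service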
I am not currently able to replicate this, but based on the screenshots the warning is just that there is less space available than recommended. If this is the case, everything should still work.
Given that this is a sandbox that should never be used for production, feel free to ignore the warning.
If you want to get rid of the warning, it may be possible to do a quick fix by changing the warning threshold via the alert definition.
If this is still not sufficient, or you want to leverage more storage, please follow the steps outlined by #manohar

Lsync not processing files in the correct order in a master-master/bi-directional setup

I'm using LSync to synchronize the web root for two separate CentOS 7 servers running Apache. So far, it seems to be running decently, but every so often, I notice that lsync doesn't process the files properly.
An example of the issue I'm having - If I have a file called hello-world.txt on server01 and server02, and I delete it from server01, when lsync runs, instead of deleting it from server02, it actually re-creates it on server01.
I haven't found anything online about this, and I'm new to using lsync, so I'm not quite sure how to go about fixing this.
Not sure it's needed, but here's the lsync configuration file (/etc/lsyncd.conf):
settings {
    logfile        = "/var/log/lsyncd.log",
    statusFile     = "/var/log/lsyncd.stat",
    statusInterval = 2
}

sync {
    default.rsync,
    source = "/var/www/",
    target = "192.168.1.36:/var/www/",
    rsync  = {
        rsh = "/usr/bin/ssh -l lsync -i /etc/lsync/.ssh/id_rsa",
    }
}
Any help would be appreciated! Thanks!
lsyncd does not do bidirectional synchronization; its purpose is to make that directory look like this directory, continuously.
You could achieve the same effect by running rsync as a cron job. The only difference is that lsyncd is more responsive when files are changed, and more efficient when files are idle.
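For example, a crontab entry roughly like the following would mirror the web root every five minutes, reusing the host, user and key from the lsyncd config above (the schedule is only an illustration):
*/5 * * * * rsync -a --delete -e "ssh -l lsync -i /etc/lsync/.ssh/id_rsa" /var/www/ 192.168.1.36:/var/www/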
With extreme care, you could set up lsyncd on both servers, syncing in both directions, and then you might get what you want, but that relies on updates being not too rapid (when server01 sends an update to server02, server02 will detect that change and attempt to send it right back to server01, which is harmless as long as that file has not changed again). I'd not recommend this setup; use SyncThing, or only make edits to the "master" server.

Why is the minidlna database not being refreshed?

I am developing a MiniDLNA server to stream media over WiFi. Existing files are shown properly. However, when I add new files to the media folders, the changes are not reflected on MiniDLNA clients. I have also tried restarting the server, but it does not pick up the changes.
I changed inotify_interval = 60 but it's still not updating files.db, which is the MiniDLNA media database. If I delete this database and restart the server, it shows the changes.
Does anyone know what the problem might be?
$ minidlnad -h
…
-r forces a rescan
-R forces a rebuild
In summary, the most reliable way to have MiniDLNA rescan all media files is by issuing the following set of commands:
$ sudo minidlnad -R
$ sudo service minidlna restart
Client-side script to rescan server
Often, however, MiniDLNA will be running on a remote server. Here is a client-side script that requests a rescan on such a server:
#!/usr/bin/env bash
ssh -t server.on.lan 'sudo minidlnad -R && sudo service minidlna restart'
AzP already provided most of the information, but some of it is incorrect.
First of all, there is no such option as inotify_interval. The only option that exists is notify_interval, and it has nothing to do with inotify.
So to clarify, notify_interval controls how frequently the (mini)dlna server announces itself in the network. The default value of 895 means it will announce itself about once every 15 minutes, meaning clients will need at most 15 minutes to find the server. I personally use 1-5 minutes depending on client volatility in the network.
In terms of getting minidlna to find files that have been added, there are two options:
The first is equivalent to removing the files.db file and consists in restarting minidlna with the -R argument, which forces a full rescan and builds the database from scratch. Since version 1.2.0 there is also the -r argument, which performs a rescan: it preserves the existing database, dropping records for files that have disappeared and adding records for new ones.
The second is to rely on inotify events by setting inotify=yes and restarting minidlna. If inotify is set to no, the only way to update the file database is a forced full rescan.
Additionally, in order to have inotify working, the file-system must support inotify events, which is not the case in most remote file-systems. If you have minidlna running over NFS it will not see any inotify events because these are generated on the server side and not on the client.
Finally, even if inotify is working and is supported by the file-system, the user under which minidlna is running must be able to read the file, otherwise it will not be able to retrieve necessary metadata. In this case, the logfile (usually /var/log/minidlna.log) should contain useful information.
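For example, assuming the daemon runs as a user named minidlna (check the user= setting in /etc/minidlna.conf) and your media directory is /path/to/media, you could verify read access and inspect the log like this:
sudo -u minidlna ls -lR /path/to/media | head
tail -n 50 /var/log/minidlna.log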
MiniDLNA uses inotify, which is a functionality within the Linux kernel, used to discover changes in specific files and directories on the file system. To get it to work, you need inotify support enabled in your kernel.
The notify_interval option (notice the lack of a leading 'i'), as far as I can tell, is only used if you have inotify disabled. To use notify_interval (i.e. to have the server 'poll' the file system for changes instead of being notified of them automatically), you have to disable the inotify functionality.
This is how it looks in my /etc/minidlna.conf:
# set this to no to disable inotify monitoring to automatically discover new files
# note: the default is yes
inotify=yes
Make sure that inotify is enabled in your kernel.
If it's not enabled, and you don't want to enable it, a forced rescan is the way to get MiniDLNA to pick up changes on the drive.
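One way to check this on a typical distribution (assuming the kernel config is exposed under /boot) is:
grep -i inotify /boot/config-$(uname -r)   # look for CONFIG_INOTIFY_USER=y
ls /proc/sys/fs/inotify/                   # this directory exists only when inotify is available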
I have recently discovered that minidlna doesn't update the database if the media file is a hardlink. If you want these files to show up in the database, a full rescan is necessary.
For example: if you have a file /home/movies/foo.mkv and a hard link at /home/minidlna/video/foo.mkv, where /home/minidlna is your minidlna share, you will have to do a rescan before that file appears in the database (and subsequently on your DLNA client).
I'm still trying to find a way around this. If anyone has any input, it's most welcome.
There is a patch for the minidlna source code available at SourceForge that avoids a full rescan and performs a kind of incremental scan instead. That worked fine, but with some later versions the patch is broken. See here: Link to SF
Regards
Gerry
I have solved it with a small script:
Every 15 seconds it checks the size of the directory (/media/seriesPI); the service is restarted if there are changes.
#!/bin/bash

function sizeFiles(){
    for i in $(du /media/seriesPI/ | awk '{print $1}')
    do
        cad+=$i
    done
}

sizeFiles
# first size
first=$cad
cad=''

while true
do
    sizeFiles
    echo "$first != $cad"
    if [ "$first" != "$cad" ] ; then
        echo "Directory size has changed!"
        echo "Restart service MiniDLNA"
        sudo service minidlna restart
        # update new size
        first=$cad
    else
        echo "There are no changes in the directory"
    fi
    echo "waiting 15 seconds..."
    sleep 15
    cad=''
done
Resolved with a root crontab entry:
10 * * * * /usr/bin/minidlnad -r

Joomla 1.5 Site Backup Strategy

I would like to make a complete backup of my whole Joomla 1.5 based site from time to time. How would this ideally be done? Are there any common pitfalls? Note that I only have FTP access to the hosting server. Is there a step-by-step tutorial somewhere? I am using the latest JoomGallery and Kunena 1.0.9 (Legacy mode).
Maybe there is a good way to automate this?
There are two parts of the backup you have to worry about: the database and the files.
The first part is the database. It can be backed up using something like phpMyAdmin. If you don't have this available on your server already, it's not too hard to upload and get it going yourself. From there, you can just Export the entire database to a gzip file.
The second part is the code and the uploaded files. The code base shouldn't change too often, so you could probably just make one backup of this. There are a number of ways; the simplest is to just download the entire folder via FTP, though if you're on Linux, a single command line can fetch only the changed files (rsync? a sketch follows below).
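If you do have SSH access to a Linux host (the asker only has FTP, so this may not apply), a sketch of such a command would be (host and paths are placeholders):
rsync -avz user@yourhost.example.com:/path/to/joomla/ /local/backups/joomla/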
The database is the main thing you have to worry about though: everything else should be able to be rebuilt just by reinstalling.
I think this: http://www.joomlapack.net/ is what you need. I use it myself and it works like a charm. Both for backups and for moving my Joomla installations from developer sites and to the real site.
Get an FTP synchronisation tool and keep an up-to-date copy of your site locally. Then you could run this batch script:
mysqldump -hhost -uuser -p%1 schema > C:\backup.sql
to create a backup of your mysql tables at various points in time.
Edit:
You would need MySQL Server installed on your local machine, with the path to its bin directory in your PATH, in order to run the mysqldump command without much hassle. -p%1 takes the password provided on the command line, as you wouldn't want to store passwords in your batch script.
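Invocation would then look something like this, assuming the script above is saved as backup.bat:
backup.bat yourDatabasePassword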
If you only have FTP access you have a bit of a problem, as besides all the files you'll also have to back up the database. Without access to the database, a full backup won't do you any good.
Whatever backup strategy you choose, be sure it can handle UTF-8 correctly. Joomla 1.5 stores all content as UTF-8, even when the database charset is set to 'iso-8859-1', so when the backup solution detects the database charset, characters like € or é will come out as garbled sequences - not really what you want.
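With mysqldump, for example, you can force the connection character set explicitly so the dump does not depend on the server defaults (host, user and schema are placeholders):
mysqldump --default-character-set=utf8 -hhost -uuser -p schema > backup.sql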
I absolutely endorse using Joomlapack - it works great. The optional remote tools allow you to initiate the backup from a Windows desktop machine - it performs the backup and downloads it. The remote tools have a scheduler, and you can also set them off to back up and download a list of sites.
Joomlapack also provides a file "kickstart.php" which you copy to your empty server account along with the backup, which automates the restore procedure. You do have to create an empty database with PHPMyAdmin or similar, and you are given the opportunity to supply the database parameters (host, database, username, password) during the process.
One pitfall I did run into with this though is that some common components can have absolute URLs in their configuration - e.g. SOBI2, Virtuemart. It's then just a matter of finding the appropriate configuration file, editing it and re-uploading it.
Another problem was that one archive (either ZIP or their JPA format) contained a filename with a "?" character in it (from a Linux server), which caused a problem when installing it locally on a Windows WAMP stack - extraction of the ZIP file failed and stopped the process from completing cleanly.
I suggest using the automatic backup service at http://www.everlive.net
Update:
OK, here is some more information. EverLive.net is a website where you can create a free account. Enter your website details and you are ready to take your backups with just one click. Restore is also possible in the same way.
Furthermore, you can use the automatic backup option to take backups at defined intervals. Other than that, you can use the website health check service, which informs you if your website is not available.