Is there any way to schedule redis back-ups at a specific time of day (e.g. 3:00 AM GMT) - preferably via a setting in the accompanying conf file?
I already understand that one can set a backup rule in redis configuration (e.g. save every X hours if Y keys have changed).
But how does one schedule that backup at a particular time of day? Something basic but effective would be ideal. In case it matters, my Redis version is 5.0.3.
As far as I know, this is currently not possible from inside Redis, but it is achievable using crontab. Here is a short example:
Create a backup script file at /tmp/backup.sh:
#!/bin/bash
echo save | redis-cli >> /tmp/redis-backup.log
If using sockets, the above would be:
echo save | redis-cli -s /var/run/redis.sock >> /tmp/redis-backup.log
The socket location in your system may vary.
Next, give execute permission to the script:
chmod +x /tmp/backup.sh
Finally, make an entry in crontab: crontab -e
0 3 * * * /tmp/backup.sh
This will run backup.sh at exactly 3 AM server time every day.
In case you want to disable Redis's automatic saving rules (without restarting the instance), the best way is to log into redis-cli and issue CONFIG SET save "". Double-check that it worked via CONFIG GET save. Don't forget to change the save settings in the relevant conf file as well, so the change survives a restart. Lastly, it's wiser to use BGSAVE instead of SAVE on a production instance, since SAVE blocks the server while it writes the RDB file.
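For example, a minimal redis-cli session (assuming the default host and port):
redis-cli CONFIG SET save ""       # disable automatic RDB snapshots at runtime
redis-cli CONFIG GET save          # verify: should now return an empty value
redis-cli BGSAVE                   # fork a background snapshot instead of a blocking SAVE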
For more, check out these links:
How To Back Up and Restore Your Redis Data
Cron Scheduler
How To Start/Stop/Restart Cron Service In Linux
I have a problem. I created a pool consisting of a single volume backed by one 2.5 TB file, just to deal with file duplicates. I copied a folder of photos into it; some of the photos were not backed up anywhere else. Just now I noticed my pool folder is empty, and when I checked with 'sudo zfs list' it said 'No datasets available'.
I thought the pool had been detached, and to attach it again I re-ran all of these commands:
sudo zpool create singlepool -f /home/john/zfsvolumes/zfs_single_volume.dat -m /home/share/zfssinglepool
sudo zfs set dedup=on singlepool
sudo zpool get dedupratio singlepool
sudo zfs set compression=lz4 singlepool
sudo chown -R writer:writer /home/share/zfssinglepool
Now I see an empty pool!
Can I get back the folders I copied into the pool before I ran zpool create again?
Unfortunately, use of zpool create -f will recreate the pool from scratch even if ZFS recognizes that a pool has already been created using that storage:
-f      Forces use of vdevs, even if they appear in use or specify a
        conflicting replication level. Not all devices can be
        overridden in this manner.
This is similar to reformatting a partition with another file system: whatever data was written there stays in place on disk, but the references the file system needs to find it are erased. You may be able to pay an expert to reconstruct your data, but otherwise I'm afraid the data will be very hard to get back from your pool. As in any data recovery mission, I'd advise making a copy of the data ASAP onto some external media and doing the recovery from that copy, in case further attempts at recovery corrupt the data even further.
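As a minimal sketch (the destination path is an assumption), the pool's backing file can be copied raw to an external drive before any recovery attempt:
# raw copy of the file-backed vdev to external media
dd if=/home/john/zfsvolumes/zfs_single_volume.dat of=/mnt/external/zfs_single_volume.img bs=1M status=progress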
I have been using Redis for the last 12 months without any issue, but for the last 30 days the database has been getting emptied for no known reason, and we couldn't find any logs about it. It even flushes all the data out at random after we restore it.
We tried the following steps to resolve this, with no result:
We have checked redis logs
Monitored the redis using MONITOR command
We tried renaming the critical commands through the config, but Redis goes down after the config change; below is an example command:
rename-command FLUSHDB e0cc96ad2eab73c2c347011806a76b73
We are going mad without knowing what is happening. Any help is appreciated.
Redis Version : 2.8.17
Running under Debian Linux
Renaming the command through the config file will work in this case.
You have to place the same rename-command directive inside the config file:
rename-command FLUSHDB e0cc96ad2eab73c2c347011806a76b73
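A minimal sketch of the relevant redis.conf section (the hashed aliases are just examples; renaming FLUSHALL and CONFIG as well is my own suggestion, since they are equally dangerous on an exposed instance):
# hide destructive/administrative commands behind obscure aliases
rename-command FLUSHDB  e0cc96ad2eab73c2c347011806a76b73
rename-command FLUSHALL a31907b21c437f46808ea49322c91d23a
rename-command CONFIG   ""     # an empty string disables the command entirely
Note that the Redis server has to be restarted for config-file changes to take effect.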
How can I dump a Redis instance's database 0 and restore it on my local machine into a different database (8)?
I already secure copied the dump file:
scp hostname:/var/lib/redis/dump.rdb .
But if I replace my local redis dump.rdb with this one, I'll get the data in database 0. How can I restore it to a specific database?
Firstly note that the use of numbered/shared Redis databases is inadvisable. You really should consider using dedicated Redis servers with a single DB (0) on them (more info at: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances)
Redis does not offer a straightforward way to do this, but there are two basic ways one could go about it:
Pre-processing: modify the dump.rdb file to load into your database of choosing. You could build a tool for that or perhaps use one of the existing ones. Jan-Erik has done an outstanding job of documenting the RDB v7 format at http://rdb.fnordig.de/file_format.html so all you need to do is basically change the Database Selector byte.
Post-restore: use the MOVE command on the output of SCANning your restored database; this should be easily scriptable.
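A minimal sketch of the post-restore approach (it assumes the restored data sits in local DB 0 and should end up in DB 8, and that keys contain no whitespace):
# move every key from DB 0 to DB 8 on the local instance
redis-cli -n 0 --scan | while read -r key; do
    redis-cli -n 0 MOVE "$key" 8
done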
I ended up creating a script in Ruby to dump and restore the keys I wanted. (Please note that this approach is slow: it takes around 1 minute for 200 keys.)
Get the keys to dump / restore
ssh hostname redis-cli --scan --pattern 'awesome_filter_pattern*'
Open an ssh connection to the production server
Dump the remote key
dump = ssh.exec!("redis-cli dump #{key}").chomp
Restore it on localhost
$redis.connection.restore(key, 0, dump)
I am developing a MiniDLNA server to stream media over WiFi. Existing files are shown properly. However, when I add new files to media folders the changes are not updated across MiniDLNA clients. I have also tried to restart the server but it does not reflect the changes.
I changed inotify_interval = 60 but it still does not update files.db, which is the MiniDLNA media database. If I delete this database and restart the server, it does show the changes.
Does anyone know what the problem might be?
$ minidlnad -h
…
-r forces a rescan
-R forces a rebuild
In summary, the most reliable way to have MiniDLNA rescan all media files is by issuing the following set of commands:
$ sudo minidlnad -R
$ sudo service minidlna restart
Client-side script to rescan server
More often than not, however, MiniDLNA will be running on a remote server. Here is a client-side script to request a rescan on such a server:
#!/usr/bin/env bash
ssh -t server.on.lan 'sudo minidlnad -R && sudo service minidlna restart'
AzP already provided most of the information, but some of it is incorrect.
First of all, there is no such option as inotify_interval. The only option that exists is notify_interval, and it has nothing to do with inotify.
So to clarify, notify_interval controls how frequently the (mini)dlna server announces itself in the network. The default value of 895 means it will announce itself about once every 15 minutes, meaning clients will need at most 15 minutes to find the server. I personally use 1-5 minutes depending on client volatility in the network.
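For illustration (the value is just an example), this is set in /etc/minidlna.conf:
# announce the server every 5 minutes instead of the default 895 seconds
notify_interval=300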
In terms of getting minidlna to find files that have been added, there are two options:
The first is equivalent to removing the files.db file and consists in restarting minidlna with the -R argument, which forces a rebuild: the database is dropped and rebuilt from scratch. Since version 1.2.0 there is also the -r argument, which performs a rescan: it preserves the existing database, dropping stale records and adding new ones.
The second is to rely on inotify events by setting inotify=yes and restarting minidlna. If inotify is set to no, the only way to update the file database is a forced rescan or rebuild.
Additionally, in order to have inotify working, the file-system must support inotify events, which is not the case in most remote file-systems. If you have minidlna running over NFS it will not see any inotify events because these are generated on the server side and not on the client.
Finally, even if inotify is working and is supported by the file-system, the user under which minidlna is running must be able to read the file, otherwise it will not be able to retrieve necessary metadata. In this case, the logfile (usually /var/log/minidlna.log) should contain useful information.
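A quick way to check both conditions (a sketch: the minidlna user name, the media path, and the log location are assumptions that depend on your distribution):
# which user is minidlnad running as?
ps -o user= -C minidlnad
# can that user actually read a sample media file?
sudo -u minidlna test -r /path/to/media/file.mkv && echo readable
# look for permission or metadata errors
tail -n 50 /var/log/minidlna.log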
MiniDLNA uses inotify, which is a functionality within the Linux kernel, used to discover changes in specific files and directories on the file system. To get it to work, you need inotify support enabled in your kernel.
The notify_interval (notice the lack of a leading 'i') is, as far as I can tell, only used if you have inotify disabled. To use notify_interval (i.e. have the server 'poll' the file system for changes instead of being notified of them automatically), you have to disable the inotify functionality.
This is how it looks in my /etc/minidlna.conf:
# set this to no to disable inotify monitoring to automatically discover new files
# note: the default is yes
inotify=yes
Make sure that inotify is enabled in your kernel.
If it's not enabled, and you don't want to enable it, a forced rescan is the way to force MiniDLNA to re-scan the drive.
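A minimal way to check (a sketch; the kernel config path is typical for Debian-family systems and may differ elsewhere):
# inotify support is compiled in if this prints CONFIG_INOTIFY_USER=y
grep -i inotify_user /boot/config-$(uname -r)
# the interface is usable if these entries exist
ls /proc/sys/fs/inotify/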
I have recently discovered that minidlna doesn't update the database if the media file is a hardlink. If you want these files to show up in the database, a full rescan is necessary.
For example: if you have a file /home/movies/foo.mkv and a hardlink in /home/minidlna/video/foo.mkv, where /home/minidlna is your minidlna share, you will have to do a rescan for that file to appear in the db (and subsequently in your DLNA client).
I'm still trying to find a way around this. If anyone has any input, it's most welcome.
There is a patch for the minidlna source code available at SourceForge that performs a kind of incremental scan instead of a full rescan. That worked fine, but with some later version the patch broke. See here: Link to SF
Regards
Gerry
I solved it with a small script:
Every 15 seconds it checks the size of the directory (/media/seriesPI), and the service is restarted if there are changes.
#!/bin/bash
# Concatenate the sizes of /media/seriesPI and its subdirectories into one string;
# if anything changes, the string changes too.
function sizeFiles() {
    for i in $(du /media/seriesPI/ | awk '{print $1}'); do
        cad+=$i
    done
}

# take the initial snapshot
sizeFiles
first=$cad
cad=''

while true; do
    sizeFiles
    echo "$first != $cad"
    if [ "$first" != "$cad" ]; then
        echo "Directory size has changed!"
        echo "Restarting MiniDLNA service"
        sudo service minidlna restart
        # remember the new size
        first=$cad
    else
        echo "There are no changes in the directory"
    fi
    echo "waiting 15 seconds..."
    sleep 15
    cad=''
done
Resolved with a root crontab entry:
10 * * * * /usr/bin/minidlnad -r
I want to switch from MySQL to MongoDB, but large data losses (more than 1 hour) are not acceptable for me.
I need to have 3 backup plans:
Hourly backup plan. Data is flushed to disk every X minutes, and if something goes wrong with the server I can be sure that after a reboot it will have all data up to at most an hour ago. Can I configure this?
Daily backup plan. Data is synced to a backup disk every day, so even if the server explodes I can recover yesterday's data within a few hours. Should I use fsync, master-slave, or something else? I would like minimal traffic, so ideally only changes would be sent.
Weekly backup plan. Data is synced to a second backup disk, so if both the server and the first backup disk explode I still have at least last week's data. Here it is a question of reliability, so it's OK to send all data over the network.
How can I do it?
The fsync command flushes the data to disk. It is executed every 60 seconds by default, but this can be configured using the --syncdelay command-line parameter.
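For example (a sketch; the data path is an assumption):
# flush data files to disk every 60 seconds (the default)
mongod --dbpath /data/db --syncdelay 60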
The documentation on backups has some good pointers for daily and weekly backups. For the daily backup, a master-slave configuration seems like the best option, as it will only sync changes.
For the weekly backup you can also use a master-slave configuration, or replication. Another option is the mongodump utility, which will back up the entire database. It is capable of creating backups while the database is running, so you can run it on the main database or one of the slaves. You can also lock the slave before backing it up.
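A minimal mongodump sketch for the weekly run (host, port, and output path are assumptions):
# dump the whole instance into a dated directory; can also be pointed at a slave
mongodump --host localhost --port 27017 --out /backups/weekly-$(date +%F)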
Try this backup script if you want to create a backup from a slave MongoDB database to S3.
#!/bin/bash

# DB host (secondary preferred so as to avoid impacting primary performance)
HOST='SomeHost/mongodbtest-slave'

# DB name
DBNAME=***

# S3 bucket name
BUCKET=*-backup

# Linux user account
USER=ubuntu

# Current time
TIME=$(/bin/date +%d-%m-%Y-%T)

# Password (quoted so the '#' is not treated as a comment)
PASSWORD='somePassword#!2*1'

# Username
USERNAME=someUsername

# Backup directory
DEST=/home/ubuntu/tmp

# Tar file of backup directory
TAR=$DEST/../$TIME.tar

# Create backup dir (-p to avoid warning if it already exists)
/bin/mkdir -p $DEST

# Log
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME";

# Dump from the mongodb host into the backup directory
mongodump --port 27017 -d $DBNAME -u $USERNAME -p "$PASSWORD" -o $DEST

# Create tar of backup directory
/bin/tar cvf $TAR -C $DEST .

# Upload tar to S3
/usr/bin/aws s3 cp $TAR s3://$BUCKET/

# Remove tar file locally
/bin/rm -f $TAR

# Remove backup directory
/bin/rm -rf $DEST

# All done
echo "Backup available at https://s3.amazonaws.com/$BUCKET/$TIME.tar"
You can put the steps above into an executable shell file and run it at any interval using crontab.
If you want to outsource the backup solution entirely, MongoDB Management Service takes snapshots every six hours. The default retention policy on the snapshots will allow you to get point-in-time restore for 24 hours, daily snapshots for a week, weekly snapshots for a month, and monthly snapshots for a year.
This FAQ has the full retention policy.
The backup service continually backs up your replica set by reading the oplog so the overhead is lower than full local periodic snapshots.
Maybe you can use automongobackup.
On the first point: MongoDB has the notion of a 'durable write operation'. If journaling is enabled, you can only lose data that has not yet been written to the journal, which covers a very small window of time (100 milliseconds by default).
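A sketch of enabling it explicitly (the data path is an assumption; on 64-bit builds from version 2.0 onward journaling is already the default):
mongod --dbpath /data/db --journal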
On the second and third points: you can set up master-slave replication, but this will not protect you from data errors (for example, if important data is accidentally deleted). Therefore, you need to provide regular backups one way or another.
There are several approaches here:
LVM snapshots are a good solution if your filesystem supports them and journaling is enabled in your MongoDB. For more details see here
mongodump - you can create a shell script that runs scheduled backups via cron and ships them to storage. Here is an example of a good script (a minimal cron sketch follows this list).
Backup as a Service. There are many solutions to make the backup for you.
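The cron entry itself could be as simple as this sketch (paths are assumptions; note that % must be escaped in crontab):
# nightly mongodump at 02:30 into a dated directory
30 2 * * * /usr/bin/mongodump --out /backups/daily-$(date +\%F)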