Unison sync: recovering copy of replaced files

I have configured Unison for synchronising files among servers. It takes a copy of a file from SERVER1 and replaces or copies the file to the other servers. I just added a folder directly onto SERVER2, but SERVER1 (the base server) had an older copy of that folder and its contents. After using Unison to synchronise all of my files to SERVER2, that folder was replaced by the older folder from SERVER1.
Is there any way to recover files from SERVER2? Does Unison itself maintain some version control or backups?

Depending on your configuration, Unison should register this as a conflict, where you would need to manually tell it to push the files from SERVER1 to SERVER2. Unison does not maintain backups by default, so unless you have enabled them, the files on SERVER2 have been overwritten.
To enable backups in Unison, you need to have something like this in your Unison profile:
backuplocation = central
backupdir = Unison-Backups
backup = Name {.*,*}
maxbackups = 7
backupprefix =
backupsuffix = .$VERSION
This will keep up to 7 backup versions of every file, store those backups in the Unison-Backups directory, and append the version number to the name of each backed-up file. See the backups section of the Unison manual for more details.
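If backups were enabled, recovering a file is just a matter of copying the right version back. A minimal sketch, assuming the profile above and that the central backups land under ~/.unison/Unison-Backups (check your own Unison home and backupdir setting):
# List the backed-up versions of a file
ls ~/.unison/Unison-Backups/path/to/file.*
# Copy the version you want back into the replica
cp ~/.unison/Unison-Backups/path/to/file.2 /path/to/replica/path/to/file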

Related

How to restore Redis db?

I am following the documentation on how to restore Redis, and I am at a complete loss at this point.
The document says
127.0.0.1:6379> SAVE
OK
This command will create the dump.rdb file in your redis directory.
Which it does: it creates that exact file for me in /usr/lib/redis, which is alright I guess.
To restore redis data, just move the redis backup file (dump.rdb) into your redis directory and start the server. To find your redis directory, the CONFIG command can be used. The CONFIG GET command is used to read the configuration parameters of a running Redis server.
127.0.0.1:6379> CONFIG get dir
1) "dir"
2) "/var/lib/redis/6379"
Here is where it makes no sense to me. The .rdb file for me is already saved in /var/lib/redis/, and I have no such subfolder. I don't understand what "dir" is doing there and how I can restore my database.
Please enlighten me. Either I am not able to save it, or I simply cannot find it.
Okay, so basically: the rdb file in /var/lib/redis/ is saved every time the server stops; it can be copied to another folder as a backup, and whatever dump.rdb is moved back into that directory is used as the restore point every time the redis service starts.
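Put together, a restore boils down to stopping the server, placing the backup copy of dump.rdb into the directory reported by CONFIG GET dir, and starting the server again. A minimal sketch (the paths and service name are assumptions from a typical Ubuntu install; check CONFIG GET dir on your own server first):
redis-cli SHUTDOWN NOSAVE                 # stop redis without overwriting the dump
cp /backups/dump.rdb /var/lib/redis/dump.rdb
chown redis:redis /var/lib/redis/dump.rdb
sudo service redis-server start           # redis loads dump.rdb from "dir" on startup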

Where does dump.rdb belong?

I remember playing around with some settings and I believe it changed the location of dump.rdb. Now, dump.rdb auto-magically appears at the root of my projects.
Where does it belong, and how would I return it there? Also, how does this location change when in a production environment?
Where does it belong?
Wherever you want.
The default directory is ./, meaning the directory the Redis server was started from.
Edit:
* I am modifying your second question (asked in a comment) a little bit.
Is it possible to change the location of dump.rdb? How?
Yes, it is possible. There are two ways I can think of.
1. Modify the redis configuration file (e.g. redis.conf) and restart the redis server. This way, every restart after this one will use the new directory, but redis will not reload any previous data on the first restart (because there will be nothing in the new location to reload from).
To reload previous data, the previous dump.rdb would have to be moved to the new location manually before restarting the server.
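For example (the config path is an assumption; on Debian/Ubuntu it is typically /etc/redis/redis.conf):
# In redis.conf: point "dir" at the new location
dir /var/lib/redis/new-location
Then move the old dump there before restarting, so the data gets reloaded:
mv /var/lib/redis/dump.rdb /var/lib/redis/new-location/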
2. Set the new directory with the CONFIG SET command, e.g.
CONFIG SET dir path/to/new/directory
* Note that the path has to be a directory.
That's it! But this change is not permanent, because a server restart will go back to the old directory.
To make the new directory permanent, execute CONFIG REWRITE to rewrite the configuration file. Remember, the redis server has to have write permission to that file.
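For example, the whole sequence in redis-cli might look like this (the target directory is hypothetical):
127.0.0.1:6379> CONFIG SET dir /var/lib/redis/new-location
OK
127.0.0.1:6379> CONFIG REWRITE
OK
127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/var/lib/redis/new-location"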
dir path/to/directory has to be set in the redis config file.

Smart local copy of a remote directory

Currently I have a bunch of local copies of dev/production websites. Each copy contains the "files" directory, which contains files uploaded by site users. Currently I use rsync to synchronize the directories contents from remote servers (via ssh).
There are some annoyances:
I have to run rsync manually each time when I want fresh files (this could be automated of course, but as I have a lot of website copies, it's not a good idea).
The rsync execution takes some time.
Disc space on my laptop is running out.
I think all of this could be solved if there were some kind of software that could work like a proxy:
When I list files, it requests the file list from the remote server and caches the results for some (configurable) time.
When I first time request file contents, it retrieves the remote file and saves it locally.
When I update a file, it only gets updated locally.
When I save a new file in the "files" directory, it does not go to the remote server.
Of course, the logic of such software would have to be much more complex, but I hope my idea is clear: don't waste disk space, download files on demand, make no remote changes.
Is there any software that works like that?
Map a network drive with NFS or sshfs. Make local copies if you really need a file.
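A minimal sshfs sketch (host and paths are hypothetical; adjust them to your servers):
mkdir -p ~/mnt/site1-files
sshfs user@remote-server:/var/www/site1/files ~/mnt/site1-files -o reconnect
# Files are fetched from the remote on access; unmount when done:
fusermount -u ~/mnt/site1-files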
I did not mention it in the question, but I needed this for work with Drupal. And now I have found a Drupal-only solution, the Stage File Proxy module.
It does exactly what I need: downloads files from a remote server only when they are requested.

using multiple redis ports or instances with separate backup files

I'm making new instances of redis by using different ports (Multiple Redis Instances)
How do I control where each port's dump.rdb backup file saves to?
Are both instances going to save to the same file? Can I make separate backup files for each instance?
Thanks
You can have different configurations for each instance and use them like this:
$ redis-server /path/to/redis1.conf
Each configuration defines the filename for the dump file in the following setting:
# The filename where to dump the DB
dbfilename dump1.rdb
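So for two instances, a minimal pair of configs might look like this (ports and paths are just examples):
# redis1.conf
port 6379
dir /var/lib/redis/6379
dbfilename dump1.rdb
# redis2.conf
port 6380
dir /var/lib/redis/6380
dbfilename dump2.rdb
Each instance then writes its dump to its own directory and filename.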
You can use the install_server.sh script from the Redis git repository to easily set up multiple servers, each with its own configuration (port, rdb directory, etc.).

Use rsync without copying files that are in use

I have a server (Machine A) that receives uploads throughout the day from other machines. I have a script running on another internal server (running as cron - Machine B) that uses rsync to pull these files onto itself and remove the originals on Machine A. Some of these uploads last an hour or more.
How do I use rsync so that it won't attempt to copy files that are currently uploaded (being written to)? I don't want it to pull partial uploads and then attempt to process them.
I'm using Ubuntu 10.04 64-bit on both machine A & B.
In order to make incremental backups with rsync, you should use the --update or -u option. This makes rsync skip any file that is newer on the receiver than on the sender; a file that exists on both ends with the same timestamp is only updated if the sizes differ.
About partial uploads: rsync stores the data being transferred in a temporary file and only moves it into the destination directory once the transfer completes. You can use --partial in case of an rsync or network problem; it keeps partially transferred files so the sync can resume them the next time you run it.
You can check all the options in the rsync man page.
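As a concrete sketch, a pull that resumes interrupted transfers and removes the originals on Machine A could look like this (host and paths are hypothetical):
rsync -avu --partial --remove-source-files user@machine-a:/srv/uploads/ /srv/incoming/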