RSync single (archive) file that changes every time - backup

I am working on an open source backup utility that backs up files and transfers them to various external locations such as Amazon S3, Rackspace Cloud Files, Dropbox, and remote servers through FTP/SFTP/SCP protocols.
Now, I have received a feature request for doing incremental backups (in case the backups that are made are large and become expensive to transfer and store). I have been looking around and someone mentioned the rsync utility. I performed some tests with this but am unsure whether this is suitable, so would like to hear from anyone that has some experience with rsync.
Let me give you a quick rundown of what happens when a backup is made. Basically it'll start by dumping databases such as MySQL, PostgreSQL, MongoDB, and Redis. It might also take a few regular files (like images) from the file system. Once everything is in place, it'll bundle it all into a single .tar (additionally it'll compress and encrypt it using gzip and openssl).
Once that's all done, we have a single file that looks like this:
mybackup.tar.gz.enc
Now I want to transfer this file to a remote location. The goal is to reduce bandwidth and storage costs. Let's assume this little backup package is about 1GB in size. We use rsync to transfer it to a remote location and remove the local copy. Tomorrow a new backup file is generated; it turns out a lot more data has been added in the past 24 hours, so the new mybackup.tar.gz.enc comes out at about 1.2GB.
Now, my question is: Is it possible to transfer just the 200MB that got added in the past 24 hours? I tried the following command:
rsync -vhP --append mybackup.tar.gz.enc backups/mybackup.tar.gz.enc
The result:
mybackup.tar.gz.enc 1.20G 100% 36.69MB/s 0:00:46 (xfer#1, to-check=0/1)
sent 200.01M bytes
received 849.40K bytes
8.14M bytes/sec
total size is 1.20G
speedup is 2.01
Looking at the sent 200.01M bytes I'd say the "appending" of the data worked properly. What I'm wondering now is whether it transferred the whole 1.2GB in order to figure out how much and what to append to the existing backup, or did it really only transfer the 200MB? Because if it transferred the whole 1.2GB then I don't see how it's much different from using the scp utility on single large files.
Also, if what I'm trying to accomplish is at all possible, what flags do you recommend? If it's not possible with rsync, is there any utility you can recommend to use instead?
Any feedback is much appreciated!

The nature of gzip is such that small changes in the source file can result in very large changes to the resultant compressed file - gzip will make its own decisions each time about the best way to compress the data that you give it.
Some versions of gzip have a --rsyncable switch, which periodically resets gzip's compression state so that a change in the source file only affects the corresponding region of the compressed output rather than everything after it. This results in slightly less efficient compression (in most cases) but keeps the output rsync-friendly.
If that's not available to you, then it's typically best to rsync the uncompressed file (using rsync's own compression if bandwidth is a consideration) and compress at the end (if disk space is a consideration). Obviously this depends on the specifics of your use case.
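For illustration, a minimal sketch of both approaches (file, path and host names are placeholders; --rsyncable exists only in some gzip builds, e.g. Debian's patched gzip). Note that encrypting the archive with openssl enc's default random salt typically makes the whole ciphertext differ on every run, so to benefit from delta transfers you'd encrypt after transferring, or not at all:

# Option 1: make the compressed file rsync-friendly (the --rsyncable flag
# exists only in some gzip builds, e.g. Debian's patched gzip)
tar -cf - /path/to/backup-data | gzip --rsyncable > mybackup.tar.gz

# Option 2: rsync the uncompressed tar and let rsync compress on the wire (-z)
tar -cf mybackup.tar /path/to/backup-data
rsync -vhPz mybackup.tar user@remote:backups/mybackup.tar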

It sent only what it says it sent - only transferring the changed parts is one of the major features of rsync. It uses some rather clever checksumming algorithms (and it sends those checksums over the network, but this is negligible - several orders of magnitude less data than transferring the file itself; in your case, I'd assume that's the .01 in 200.01M) and only transfers those parts it needs.
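To see the delta algorithm at work, no special flag is needed as long as the old file already exists at the destination. One caveat worth knowing when testing (host and paths below are placeholders):

# rsync transfers only the changed blocks when the destination file already
# exists. Caveat: for local-to-local copies rsync defaults to --whole-file,
# so force the delta algorithm with --no-whole-file when testing on one disk.
rsync -vhP --no-whole-file mybackup.tar.gz.enc backups/mybackup.tar.gz.enc
rsync -vhP mybackup.tar.gz.enc user@remote:backups/mybackup.tar.gz.enc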
Note also that there already are quite powerful backup tools based on rsync - namely, Duplicity. Depending on the license of your code, it may be worthwhile to see how they do this.

Beware: since 3.0.0, rsync's --append no longer verifies the data that already exists on the receiving side, so it WILL corrupt your file if the existing data has changed rather than merely grown; use --append-verify for the old, checked behaviour.
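If you want append semantics with a safety net, rsync >= 3.0.0 also ships --append-verify, which includes the existing receiver-side data in the full-file checksum and re-sends the whole file if it doesn't match. A minimal sketch (host and paths are placeholders):

# --append-verify re-checks the data already on the receiver; if the old part
# of the file changed, the file is re-sent instead of silently corrupted
rsync -vhP --append-verify mybackup.tar.gz.enc user@remote:backups/mybackup.tar.gz.enc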

Related

Syncing large amount of files across multiple machines in a scalable way

I'm looking for a way to sync a large number of machines (hundreds) with a remote repository.
The repository comprises small files (around 20KB each), but the total comes to a few GB and continues to grow over time.
The goal is to have changes at the remote repository propagate to all the machines as fast as possible (no more than 2 seconds).
There are tools that provide exactly this functionality, such as S3 sync or Rclone, but they carry a major disadvantage:
The sync command needs to enumerate all of the files in the bucket to determine whether each local file already exists there and whether it is the same as the local copy. The more objects you have in the bucket, the longer this takes, which means that once the bucket gets big, even a small change costs a lot of time.
I wonder if there is a way (a tool or a method) to sync only the modified files, without having to go through all of them. You can imagine comparing metadata at the source and the remote, determining the diffs, and acting accordingly.
How would you go about it?
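One way to approach it (a sketch, not a drop-in solution): watch for changes on the source and push only the files that changed, instead of re-scanning the tree on every sync. This assumes inotify-tools is installed; lsyncd packages the same idea, and fanning out to hundreds of machines would need a queue or a tree of relays on top:

#!/bin/sh
# Sketch: push files as they change instead of re-scanning the whole tree.
# Assumes inotify-tools is installed; lsyncd packages the same idea.
REPO=/srv/repo
inotifywait -m -r -e close_write,create --format '%w%f' "$REPO" |
while read -r path; do
    # In a real deployment, fan this out to all machines via a queue or a
    # tree of relays; a single receiver is shown here. Deletions would need
    # separate handling (e.g. a periodic rsync --delete sweep).
    rsync -az --relative "$path" sync@machine01:/
done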

Archiving, compressing and copying large number of files over a network

I am trying to copy more than 500,000 files (around 1 TB) over ssh; however, the pipe fails because I exceed the time limit of my ssh session on the remote computer.
So I moved on to archiving and compressing (using tar and gzip) all the files on the remote computer; however, even if I leave the process running in the background, it is cancelled once my ssh session times out.
Finally, I moved on to compressing the files one by one and then tarring them (based on a suggestion that archiving a large number of files consumes a lot of time); however, I get an error that the argument list is too long.
Since all these files are spread across 20 folders, I do not want to enter each one and split it into further subfolders before archiving and compressing.
Any suggestions would be really helpful.
Definitely tar and gzip either the whole thing or the 20 directories individually (I would do the latter, to divide and conquer at least a little). That reduces overall transfer time and provides a good error check on the other end.
Use rsync through ssh to do the transfer. If it gets hosed in the middle, use rsync --append to pick up where you left off.
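A sketch of that workflow (directory and host names are hypothetical). Note that -P is shorthand for --partial --progress; --partial keeps half-transferred files around so that rerunning the same command resumes via the delta algorithm, which is safer than plain --append if an archive might have been regenerated in between:

# Archive and compress each of the 20 directories separately
for d in dir01 dir02 dir03; do            # ...one entry per top-level folder
    tar -czf "$d.tar.gz" "$d"
done

# Transfer over ssh; rerunning the exact same command resumes after a drop
rsync -avhP ./*.tar.gz user@remote:/backups/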

Will doing fork multiple times affect performance?

I need to read log files (.CSV) using FasterCSV and save their contents in a DB (each cell value is a record). The thing is, there are around 20-25 log files that have to be read daily, and those log files are really large (each CSV file is more than 7MB). I had forked the reading process so that the user need not wait a long time, but reading 20-25 files of that size still takes time (more than 2 hours). Now I want to fork the reading of each file, i.e. around 20-25 child processes will be created. My question is: can I do that? If yes, will it affect performance, and can FasterCSV handle this?
ex:
@reports.each do |report|      # "#reports" was presumably the instance variable @reports
  pid = fork do
    # read this report's CSV with FasterCSV and insert its rows into the DB
  end
  Process.detach(pid)          # Process.dispatch doesn't exist; detach reaps the child
end
PS: I'm using Rails 3.0.7, and this is going to happen on a server running on one of Amazon's Large instances (7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform).
If the storage is all local (and I'm not sure you can really say that if you're in the cloud), then forking isn't likely to provide a speedup, because the slowest part of the operation is going to be disc I/O (unless you're doing serious computation on your data). Hitting the disc from several processes at once isn't going to speed that up, though I suppose if the disc had a big cache it might help a bit.
Also, 7MB of CSV data isn't really that much - you might get a better speedup if you found a quicker way to insert the data. Some databases provide a bulk load function, where you can load formatted data in directly, or you could turn each row into an INSERT and feed that straight to the database. I don't know how you're doing it at the moment, so these are just guesses.
Of course, having said all that, the only way to be sure is to try it!
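For instance, a bulk load of a whole file in a single statement (table, column and file names here are made up for illustration; MySQL shown, PostgreSQL has COPY for the same job):

# Load a whole CSV in one statement instead of row-by-row INSERTs.
mysql --local-infile=1 -u appuser -p logs_db <<'SQL'
LOAD DATA LOCAL INFILE '/var/log/reports/report01.csv'
INTO TABLE report_rows
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
SQL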

What are the disadvantages of storing images on a file system?

I have a few questions about storing files on the operating system. These may or may not be valid worries, but I don't want to go on without knowing.
What will happen when the directory they are stored in holds a very large amount of data (1 million images of up to 2MB each)? Will this affect RAM and make the OS slow?
What security risks does it open as far as viruses are concerned?
Would scaling just be a matter of transferring files from that machine to a new one?
The only problem will be if you try to store all of those images in a single directory.
Serving static files, you are liable to hit the limits of the network before you hit the machine's limits.
In terms of security, you want to make sure that only images are uploaded, and not arbitrary files - check more than the file extension or mime-type!
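As a sketch of that check, inspect the file's magic bytes rather than trusting the client-supplied name or MIME type ($upload is a placeholder for the uploaded file's path):

# `file --mime-type` reads magic bytes; allow only a whitelist of image types.
mime=$(file --brief --mime-type "$upload")
case "$mime" in
    image/jpeg|image/png|image/gif) echo "accepted: $mime" ;;
    *) echo "rejected: $mime" >&2; exit 1 ;;
esac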

Moving 1 million image files to Amazon S3

I run an image sharing website that has over 1 million images (~150GB). I'm currently storing these on a hard drive in my dedicated server, but I'm quickly running out of space, so I'd like to move them to Amazon S3.
I've tried doing an rsync, and it took over a day just to scan and create the list of image files. After another day of transferring, it was only 7% complete and had slowed my server down to a crawl, so I had to cancel.
Is there a better way to do this, such as gzipping them to another local hard drive and then transferring / unzipping that single file?
I'm also wondering whether it makes sense to store these files in multiple subdirectories, or whether it's fine to have all million+ files in the same directory?
One option might be to perform the migration in a lazy fashion.
All new images go to Amazon S3.
Any requests for images not yet on Amazon trigger a migration of that one image to Amazon S3. (queue it up)
This should fairly quickly get all recent or commonly fetched images moved over to Amazon and will thus reduce the load on your server. You can then add another task that migrates the others over slowly whenever the server is least busy.
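The background task could be as simple as a throttled loop (a sketch only; the bucket name and paths are made up, and it assumes the AWS CLI is installed and configured):

#!/bin/sh
# Throttled background migration: upload images that are not yet on S3.
find /var/www/images -type f | while read -r f; do
    key="images/${f#/var/www/images/}"
    if ! aws s3api head-object --bucket my-image-bucket --key "$key" >/dev/null 2>&1; then
        aws s3 cp "$f" "s3://my-image-bucket/$key" --quiet
        sleep 0.1   # crude throttle so the server stays responsive
    fi
done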
Given that the files do not exist (yet) on S3, sending them as an archive file should be quicker than using a synchronization protocol.
However, compressing the archive won't help much (if at all) for image files, assuming that the image files are already stored in a compressed format such as JPEG.
Transmitting ~150 Gbytes of data is going to consume a lot of network bandwidth for a long time. This will be the same if you try to use HTTP or FTP instead of RSYNC to do the transfer. An offline transfer would be better if possible; e.g. sending a hard disc, or a set of tapes or DVDs.
Putting a million files into one flat directory is a bad idea from a performance perspective. While some file systems cope with this fairly well, with O(log N) filename lookup times, others degrade to O(N) per lookup; multiply that by N to access all the files in the directory. An additional problem is that utilities that need to access files in order of file name may slow down significantly if they have to sort a million file names. (This may partly explain why rsync took a day to do the indexing.)
Putting all of your image files in one directory is a bad idea from a management perspective; e.g. for doing backups, archiving stuff, moving stuff around, expanding to multiple discs or file systems, etc.
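A common layout is to shard by a hash prefix so that no single directory holds more than a few thousand entries. A minimal bash sketch (the file name is hypothetical):

# Shard by hash prefix, e.g. images/ab/cd/<original-name>
f=photo123.jpg
h=$(md5sum "$f" | cut -c1-4)
dest="images/${h:0:2}/${h:2:2}"
mkdir -p "$dest"
mv "$f" "$dest/$f"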
One option you could use instead of transferring the files over the network is to put them on a hard drive and ship it to Amazon's Import/Export service. You then don't have to worry about saturating your server's network connection, etc.