Disruption during Gzip - Would it damage the file content? - gzip

Suppose we run gzip or gunzip on a relatively large text file (>1 GB), or on a set of such files within a directory, and the process is killed before it finishes, whether accidentally or intentionally.
Is there any risk that the original content of the files will be damaged? If so, is there a recommended way to run gzip/gunzip, e.g. with certain options, or do we simply have to wait until gzip/gunzip finishes?

gzip/gunzip only removes the original file after the operation has completed, so an interrupted run leaves the original intact; at worst a partial output file may be left behind to clean up. To be sure the original is never touched, pass the -k or --keep option so that it is not removed at all during your script or operation.
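A minimal sketch (filenames are placeholders; -k/--keep requires gzip 1.6 or later):
# Compress but keep the original; only the .gz output is at risk if the process is killed
gzip -k bigfile.txt
# Equivalent for decompression
gunzip -k bigfile.txt.gz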

Related

Preventing bzr-update changes to user-specific .cfg file

I have a project, hosted on launchpad, which contains a fairly user-specific configuration file.
Once the project is initially checked out, this .cfg file should obviously be downloaded too. However, further updates (via "bzr update") should ideally not alter this .cfg file, as the user will have made their own edits to it. Those edits would be overridden or merged (with potential conflicts) if I pushed an update that included my own .cfg file - I don't want that to happen!
What's the best practice to avoid this? I can't really "bzr ignore", as then any future users checking out via bzr would then not have the .cfg file.
I could, of course, replace my .cfg file with the "stock" one each time I do a commit, but this seems a bit clunky.
Or equivalently clunky, supply the .cfg file separately.
What I'm looking for is a "single-shot" download: the file comes down on the initial checkout but is never updated subsequently.
Any advice?
This is a tricky problem, because version control systems in general are not engineered to have the fine-grained commit strategies needed for this approach. If you were operating in a controlled environment, you could use plugins or hooks to exclude certain files from commits etc., but that doesn't seem to be an option here. I'll add that bzr ignore won't help you, either, because it only prevents files from being added; it doesn't prevent commits or checkout of those files.
What you can do is generate the config file during build/setup/installation if it doesn't already exist. Something like:
#!/bin/sh
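# Create the working config from the checked-in template only if it doesn't exist yet, so user edits are never overwritten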
if [ ! -e configuration.cfg ]; then
    cp etc/configuration.cfg.in configuration.cfg
fi
Here, you'd check in etc/configuration.cfg.in normally and run the above script at build/setup/installation (this could also be automated by a post_change_branch_tip hook in a controlled environment). You'd put the original in a different directory so that there's less of a risk of it getting edited by accident.

(OS X) Determine if file is being written to?

My app is monitoring a "hot" folder somewhere on the local filesystem for newly added files to push to a network location. I'm running into a problem when very large files are being written into the hot folder: the file system event notifying me of changes in the hot folder will fire well before the file completes writing. When my app tries to upload the file, it mis-reads the file size as the current number of copied bytes, not the eventual total number of bytes.
Things I've tried:
NSURL getResourceValue:forKey:error: to read NSURLAllocatedFileSizeKey (same value as NSURLFileSizeKey while the file is being written).
NSFileManager attributesOfItemAtPath:error: to look at NSFileBusy (always NO).
I can't seem to find any mechanism short of repeatedly polling a file for its size to determine if the file is finished copying and can be uploaded.
There aren't great ways to do this.
If you can be certain that the writer is using NSFileCoordinator, then you can also use that to coordinate your access to the file.
Likewise, if you're sure that the writer has opted in to advisory locking, you could try to open the file for shared access by calling open() with the O_SHLOCK and O_NONBLOCK flags. If you succeed, then there are no other descriptors open for exclusive access. You can either use the file descriptor you've got or close it and then use some other API to access the file.
However, if you can't be sure of any of those, then your best bet may be to set a timer to repeatedly check the file's metadata (size, date modified, etc.). Only when you see that it has stopped changing over a reasonable time interval (2 seconds, maybe) would you attempt to access it (and cancel the timer).
You might want to do all three: wait for the file's metadata to settle down, then use an NSFileCoordinator to read from the file. When it calls your reader block, use open() with O_SHLOCK | O_NONBLOCK to make sure no other process has exclusive access to it.
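As a rough illustration of the polling fallback described above, here is a minimal shell sketch (assuming the BSD-style stat(1) found on macOS; the path argument is a placeholder):
#!/bin/sh
# Wait until the file's size and modification time stop changing for one
# full polling interval before treating it as completely written.
f="$1"
prev=""
while :; do
    cur=$(stat -f '%z %m' "$f") || exit 1
    if [ "$cur" = "$prev" ]; then
        break
    fi
    prev=$cur
    sleep 2
done
echo "file appears settled: $f"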
You need some form of coordinated file locking. fcntl() and flock() are common functions for this. Read up on them first, then see what options you have. If you can control the code base of those other processes, all the better.
The problem with really large files is that what's changed or changing inside them is opaque and isn't always at the end.
Well-behaved processes should generally be doing atomic writes (write to a temp file, then swap it into place), but if these files are actually databases, you will want to look at using the database's server application for this sort of thing.
If the files are wrappers containing other files, it gets extra messy, as those contents might depend on one another to be in a usable state.

Can I use an rsync log file from a dry run as an input to a real run?

I like rsync: with a dry run I can see in advance which files will be deleted. But what happens if, during the backup, a sector of the source disk fails? Files could be deleted from the destination that should not be. However, if I check the log file for all planned deletions first, then use the log file as instructions to rsync, a source-disk failure during the backup should result in a lower probability of data loss.
I've read the man page and have to conclude that the answer is no. If not rsync, then what?
You can mitigate the risk of source-disk failure by using:
--delete-after          receiver deletes after transfer, not during
That way files will not be deleted on the destination if an I/O error occurs during the copy.
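For example (source and destination paths are placeholders):
rsync -a --delete-after /data/ /mnt/backup/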
But to ensure the integrity of your backup, I think the right way is to use:
--only-write-batch=FILE          like --write-batch but w/o updating destination
That writes the diffs into a file. Once the batch is created, you move it to the destination machine and apply the diffs with:
--read-batch=FILE          read a batched update from FILE
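A sketch of that workflow (the host name and paths are placeholders):
# 1. On the source machine: compute the changes against the destination,
#    writing them to a batch file without touching the destination.
rsync -a --only-write-batch=/tmp/backup.batch /data/ backuphost:/srv/backup/
# 2. Copy the batch file to the destination machine.
scp /tmp/backup.batch backuphost:/tmp/
# 3. On the destination machine: apply the recorded changes.
rsync -a --read-batch=/tmp/backup.batch /srv/backup/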

How can I determine if files in a "drop folder" are completely transferred

Remote clients will upload images (and perhaps some instructional files in specially formatted text) to a "drop folder." Once the upload is complete we need to begin processing these images. It would be an easy, but flawed, solution to just have a script automatically begin processing any files in the folder every few seconds (the files can be moved out of the folder once processed); but problems arise when attempting to process large images which have only partially transferred.
What are some tricks I can use to ensure the files are fully uploaded before processing them?
A few of my own thoughts:
The script can check the validity of the file; i.e., a partial JPEG would result in an error, and the script could respond to that error, but this would be fairly CPU-intensive. Some file formats have special markers at the end, but I can't count on that, since I'm not sure which formats I'll be dealing with.
I've heard of "file handles" but haven't really figured out the basics of what they are or how I can tell whether there is a "file handle" on a particular file. Basically, the FTP daemon (actually, I'm on Windows, so "service") would keep a handle on the file while it's being uploaded, and you would know not to process that file. These are just a few of my thoughts, but I'm not really sure whether they will work or whether there are better or more accepted ways of solving this problem.
If you have a server-side script handling the upload (PHP, ASP, JSP, whatever), you could have that script call another script to process the files, or create a flag file indicating the upload is done.
If your server is Linux-based, you can use lsof to check whether the file is open. Since your FTP daemon/script/CGI will close the file once the upload completes, lsof will no longer show the file in its list.
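A minimal check (the path is a placeholder); lsof exits non-zero when no process has the file open:
if ! lsof -- /srv/drop/incoming.jpg >/dev/null 2>&1; then
    echo "upload finished, safe to process"
fi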
If your server is Windows-based, you can use Process Explorer to list the open files.
By what method are your users uploading the images?

How to reliably handle files uploaded periodically by an external agent?

It's a very common scenario: some process wants to drop a file on a server every 30 minutes or so. Simple, right? Well, I can think of a bunch of ways this could go wrong.
For instance, processing a file may take more or less than 30 minutes, so it's possible for a new file to arrive before I'm done with the previous one. I don't want the source system to overwrite a file that I'm still processing.
On the other hand, the files are large, so it takes a few minutes to finish uploading them. I don't want to start processing a partial file. The files are just transferred with FTP or sftp (my preference), so OS-level locking isn't an option.
Finally, I do need to keep the files around for a while, in case I need to manually inspect one of them (for debugging) or reprocess one.
I've seen a lot of ad-hoc approaches to shuffling upload files around, swapping filenames, using datestamps, touching "indicator" files to assist in synchronization, and so on. What I haven't seen yet is a comprehensive "algorithm" for processing files that addresses concurrency, consistency, and completeness.
So, I'd like to tap into the wisdom of crowds here. Has anyone seen a really bulletproof way to juggle batch data files so they're never processed too early, never overwritten before done, and safely kept after processing?
The key is to do the initial juggling at the sending end. All the sender needs to do is:
Store the file with a unique filename.
As soon as the file has been sent, move it to a subdirectory called e.g. completed.
Assuming there is only a single receiver process, all the receiver needs to do is:
Periodically scan the completed directory for any files.
As soon as a file appears in completed, move it to a subdirectory called e.g. processed, and start working on it from there.
Optionally delete it when finished.
On any sane filesystem, file moves are atomic provided they occur within the same filesystem/volume. So there are no race conditions.
Multiple Receivers
If processing could take longer than the period between files being delivered, you'll build up a backlog unless you have multiple receiver processes. So, how to handle the multiple-receiver case?
Simple: each receiver process operates exactly as before. The key is that we attempt to move a file to processed before working on it: that, and the fact that same-filesystem moves are atomic, means that even if multiple receivers see the same file in completed and try to move it, only one will succeed. All you need to do is check the return value of rename(), or whatever OS call you use to perform the move, and only proceed with processing if it succeeded. If the move failed, some other receiver got there first, so just go back and scan the completed directory again.
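A minimal receiver-loop sketch in shell (directory names and the process_file command are placeholders); mv within one filesystem is an atomic rename(), so when two receivers race for the same file only one mv succeeds and the loser simply skips it:
#!/bin/sh
while :; do
    for f in completed/*; do
        [ -e "$f" ] || continue
        dest="processed/$(basename "$f")"
        if mv "$f" "$dest" 2>/dev/null; then
            process_file "$dest"   # hypothetical processing command
        fi
    done
    sleep 5
done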
If the OS supports it, use file system hooks to intercept open and close file operations, something like Dazuko. Other operating systems may let you know about file operations in another way; for example, Novell Open Enterprise Server lets you define epochs and read the list of files modified during an epoch.
Just realized that on Linux you can use the inotify subsystem, or the utilities from the inotify-tools package.
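For instance, with inotify-tools (the directory path is a placeholder) you can act only when the writer closes a file it had open for writing:
inotifywait -m -e close_write --format '%w%f' /srv/drop |
while read -r f; do
    echo "upload finished: $f"
done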
File transfer is one of the classics of system integration. I'd recommend getting the Enterprise Integration Patterns book to build your own answer to these questions; to some extent, the answer depends on the technologies and platforms you are using for endpoint implementation and for file transfer. It's quite a comprehensive collection of workable patterns, and fairly well written.