I would like to use Archive Utility.app in an app I'm writing to compress one or more files.
Doing (from the command line):
/System/Library/CoreServices/Archive\ Utility.app/Contents/MacOS/Archive\ Utility file_to_zip
It does work, but it creates a .cpgz file. I could live with that (even though .zip would be better), but the main problem is that I am not able to compress two files into one archive:
/System/Library/CoreServices/Archive\ Utility.app/Contents/MacOS/Archive\ Utility ~/foo/a.txt ~/bar/b.txt
The above command will create 2 archives (~/foo/a.txt.cpgz and ~/bar/b.txt.cpgz).
I cannot get this to do what I want either:
open -a /System/Library/CoreServices/Archive\ Utility.app --args xxxx
I'd rather not use the zip command because the files to be compressed are rather large, so it would be neat to have the built-in progress bar.
Or, could I use Archive Utility programmatically?
Thanks.
Archive Utility.app uses the following to create its zip archives:
ditto -c -k --sequesterRsrc --keepParent Product.app Product.app.zip
Archive Utility.app isn't scriptable.
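Since the app itself isn't scriptable, one route is to shell out to that same ditto invocation from your own code. Below is a sketch in Python; the helper names are mine, and because ditto takes a single source, the files are staged into a temporary folder first as one way to get several files into a single .zip (note --keepParent is dropped here, since the staging folder's name is meaningless):

```python
import shutil
import subprocess
import tempfile

def ditto_command(source, archive_path):
    # The argv Archive Utility effectively uses (per the answer above),
    # minus --keepParent, since we archive a throwaway staging folder.
    return ["ditto", "-c", "-k", "--sequesterRsrc", source, archive_path]

def zip_files(paths, archive_path):
    # Stage all inputs into one temporary folder, then zip that folder.
    with tempfile.TemporaryDirectory() as staging:
        for p in paths:
            shutil.copy(p, staging)
        subprocess.run(ditto_command(staging, archive_path), check=True)
```

This only runs on macOS, where ditto lives; the archive will contain the staged copies at its top level.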
The -dd/--display-dots option causes the command-line zip utility to display progress dots while compressing. You could parse zip's output to drive your progress bar. zip will also display dots if it takes more than five seconds to scan and open files.
Better would be to integrate a compression library, such as zlib or libbzip2. Both of those let you compress a portion of the data at a time. You'll need to handle progress bar updates, which you can do after compressing each block of data.
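As a minimal sketch of that block-at-a-time approach, here is the same idea using Python's zlib (the function name, chunk size, and callback shape are my own choices, not anything from a specific library API):

```python
import os
import zlib

def compress_with_progress(src, dst, chunk_size=1 << 20, on_progress=None):
    """Compress src into dst with zlib, one chunk at a time.

    After each chunk, on_progress (if given) receives the fraction of
    input consumed so far, which is exactly when you'd update a progress bar.
    """
    total = os.path.getsize(src) or 1  # avoid division by zero on empty files
    done = 0
    comp = zlib.compressobj(6)
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(comp.compress(chunk))
            done += len(chunk)
            if on_progress:
                on_progress(done / total)  # 0.0 .. 1.0
        fout.write(comp.flush())  # emit any buffered compressed bytes
```

The same structure carries over to zlib or libbzip2 in C: feed a buffer, write the output, bump the progress bar, repeat.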
How about Automator? The "Create Archive" action would work.
I have used Archive Utility to decompress files from AppleScript:
tell application "Archive Utility" to open filePath
However, this only tells Archive Utility to start decompressing. It completes in its own time, and the AppleScript continues executing without waiting for the decompression to finish; Archive Utility will not tell the AppleScript when it is done.
You can use the above line if Archive Utility is already running. It will add the new file to its queue and decompress when ready.
Also, Archive Utility's preferences control how it accomplishes the task. For example, it might decompress to a different folder or delete the original. You can run it as an application and change the preferences if that helps.
I have not used this to get Archive Utility to compress files.
I am running my simulations with Eclipse E100, and the run generates a file with the .UNRST extension, which takes up a lot of space in the folder.
Is there a way to stop the simulation from generating this file?
Best Regards
If you don't want unified restart files, select summary output instead: .UNSMRY (unified) or .Snnnn (non-unified), or non-unified restart files (.Xnnnn).
The relevant keywords are FMTOUT for formatted output and/or UNIFOUT for unified output (Details).
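For reference, both keywords are RUNSPEC-section switches in the .DATA deck; a sketch (the comment text is mine, and the rest of the deck is elided):

```text
RUNSPEC
-- ... other RUNSPEC keywords ...
UNIFOUT    -- unified output: one .UNSMRY / .UNRST instead of many .Snnnn / .Xnnnn
FMTOUT     -- formatted (ASCII) output instead of binary
```

Omit UNIFOUT to get the non-unified per-timestep files instead.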
I have a large batch of assorted files, all missing their file extension.
I'm currently using Windows 7 Pro. I am able to "open with" and experiment to determine what application opens these files, and rename manually to suit.
However I would like some method to identify the correct file type (typically PDF, others include JPG, HTML, DOC, XLS and PPT), and batch rename to add the appropriate file extension.
I am able to open some files with notepad and review the first four bytes, which in some cases shows "%PDF".
I figure a small script would be able to inspect these bytes and rename as appropriate. However, not all files offer such an easy method; HTML, JPG, DOC, etc. do not appear to have such an easy identifier.
This PowerShell method appears to be close: https://superuser.com/questions/186942/renaming-multiple-file-extensions-based-on-a-condition
The difficulty here is focusing the method on files with no extension, and then deciding what to do with the files that lack the four-byte identifier.
Appreciate any help!!
EDIT: Solution using TrID seen here: http://mark0.net/soft-trid-e.html
And a recursive method using PowerShell to execute TrID here: http://mark0.net/forum/index.php?topic=550.0
You could probably save some time by getting a file utility for Windows (see What is the equivalent to the Linux File command for windows?) and then writing a simple script that maps from file type to extension.
EDIT: Looks like the TrID utility mentioned on that page can do what you want out of the box; see the -ae and -ce options.
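If you'd rather not depend on an external tool, the signature check the question describes can be sketched in Python. The table below covers only a few common "magic numbers" and is my own selection; extend it for your mix of files:

```python
# (magic bytes at offset 0, extension to assign)
SIGNATURES = [
    (b"%PDF", "pdf"),
    (b"\xff\xd8\xff", "jpg"),
    (b"\x89PNG", "png"),
    (b"\xd0\xcf\x11\xe0", "doc"),  # legacy OLE container: DOC/XLS/PPT share it
    (b"PK\x03\x04", "zip"),        # also DOCX/XLSX/PPTX (they are zip files)
    (b"<!DOCTYPE", "html"),
    (b"<html", "html"),
]

def guess_extension(path):
    """Return a likely extension for path, or None if no signature matches."""
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, ext in SIGNATURES:
        # Case-insensitive compare so <HTML and <html both match.
        if head[: len(magic)].lower() == magic.lower():
            return ext
    return None
```

Files that return None (the "no easy identifier" cases) can be left in place for manual inspection, which matches the fallback the question anticipates.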
Use Python 3.

import os
import re

fldrPth = "path/to/folder"  # path to the folder of extensionless files
os.chdir(fldrPth)

for i in os.listdir():
    with open(i, 'rb') as doc:  # binary mode: many of these files aren't text
        st = doc.read(4).decode('ascii', errors='ignore')
    match = re.search(r'\w+', st)  # e.g. "%PDF" -> "PDF"
    if match:  # skip files whose first bytes give no clean identifier
        os.rename(i, i + '.' + match.group().lower())
Hopefully this would work.
I don't have test files to check the code. Take a backup and then run it and let me know if it works.
I have a script file. Unfortunately, I've overwritten it with some other data, and I need the old data back; Ctrl+Z is not working.
How do I recover it?
Unfortunately, some editors do not support Ctrl+Z across sessions, so the data cannot be recovered that way.
Are you using file versioning? What OS and version?
What is the file system? NTFS?
If you overwrite a file on NTFS, the file system tends to delete the first file and write the second as a new one, rather than destroying the original data in place, so you may be able to recover the old file with an undelete utility.
First, rename the current file, then run an undelete utility (don't download anything to this computer, as you may overwrite the file).
Run it from a memory stick.
The safest thing to do, if the data is critical, is to image the device off to donor scratch media and work from there.
Is it possible to use DotNetZip to create a zip from a file that another application has open (e.g., a log file being written by that application)?
That is, to create the zip as the log file gets written by the other application.
Hmm, well, yes, if you are willing to write some code.
One way to do it is to compress the file AFTER it has been written and closed.
You would need to have an app that runs with a filesystem watcher, and when it sees the log file being closed, it compresses that log file into a zip.
If you mean a distinct app that writes to a file and the output automagically lands in a zip file, no, I don't know of a simple way to do that. There is one possibility: if the 3rd-party app accepts a System.IO.Stream in which to write the log entries. In that case, you can do it with DotNetZip. You can get a writeable stream from DotNetZip, into which the app writes content. It is compressed as it is written, and when the writing is complete, DotNetZip closes the zipfile. To use this, check the ZipFile.AddEntry() method that accepts a WriteDelegate. It's in the documentation.
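DotNetZip's WriteDelegate is a .NET API, but the shape of the pattern, hand the producer a writable stream whose bytes are compressed as they are written, can be sketched with Python's standard zipfile module, where ZipFile.open(name, "w") plays the same role (the function name and producer callback here are my own framing):

```python
import zipfile

def write_log_into_zip(zip_path, entry_name, producer):
    """Let `producer` write directly into a compressed zip entry.

    producer is any callable taking a writable binary file-like object;
    its bytes are compressed as written, and the entry is finalized when
    the stream closes -- the same flow as DotNetZip's WriteDelegate.
    """
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        with zf.open(entry_name, "w") as stream:
            producer(stream)
```

The key constraint is the same in both worlds: the producing app has to accept a stream you control, rather than opening a file path itself.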
Remote clients will upload images (and perhaps some instructional files in specially formatted text) to a "drop folder." Once the upload is complete, we need to begin processing these images. It would be an easy, but flawed, solution to have a script automatically process any files in the folder every few seconds (the files can be moved out of the folder once processed); but problems would arise when attempting to process large images that are only partially transferred.
What are some tricks I can use to ensure the files are fully uploaded before processing them?
A few of my own thoughts:
The script can check the validity of the file; i.e., a partial JPEG would produce an error that the script could respond to, though this would be fairly CPU-intensive. Some formats have special markers at the end of the file, but I can't count on this; I'm not sure which formats I'll be dealing with.
I've heard of "file handles" but haven't really figured out what they are or how to tell whether there is a handle on a particular file. Basically, the FTP daemon (actually, I'm on Windows, so "service") would keep a handle on the file while it's being uploaded, and you would know not to process that file. These are just a few of my thoughts, but I'm not sure whether they will work or whether there are better or more accepted ways of solving this problem.
If you have a server-side upload script (PHP, ASP, JSP, whatever), you could have it call another script to process the files, or create a flag file indicating that the upload is done.
If your server is Linux-based, you can use lsof to check whether the file is open. Since your FTP daemon/script/CGI will close the file once the upload completes, lsof will not show the file in its list.
If your server is Windows-based, you can use Process Explorer to list the open files.
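Another heuristic, if you can't hook the uploader or the server at all, is to treat a file as complete once its size has stopped changing across a few polls. A sketch (the check count, interval, and timeout are arbitrary choices, and this is only a heuristic: a stalled upload looks the same as a finished one):

```python
import os
import time

def wait_until_stable(path, checks=3, interval=2.0, timeout=600):
    """Return True once path's size is unchanged for `checks` consecutive
    polls, or False if `timeout` seconds pass first."""
    deadline = time.time() + timeout
    last, stable = -1, 0
    while time.time() < deadline:
        size = os.path.getsize(path)
        if size == last:
            stable += 1
            if stable >= checks:
                return True  # size held steady long enough
        else:
            stable = 0       # still growing; restart the count
            last = size
        time.sleep(interval)
    return False
```

Pairing this with the flag-file approach above is safer than using either alone.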
By what method are your users uploading the images?