I'm trying to run a brute-force attack on a RAR archive, and I need the password check to be as fast as possible. I call rarlab's "unrar" command-line utility from my program like this:
unrar t -p[password] archive.rar
It works, but it is extremely slow. The encrypted file inside the archive is about 300 MB, and unrar only reports a CRC error (wrong password) after testing the entire file, which takes 10-15 seconds.
Is there a quicker way to test just the archive password?
Check with rar l whether the archive contains more files than the "main" file you'd like to extract. Archives usually contain .txt or .nfo files of only a few KB. You can then run the brute-force attack against just the smallest file in the archive with rar x -ppassword <archive> <file>, which should be much faster.
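A minimal sketch of that loop in Python, assuming unrar is on the PATH and that wordlist.txt, archive.rar and smallest.nfo are hypothetical names you would substitute with your own:

import subprocess

def try_password(password, archive, member):
    # Test one candidate against the small member only; exit code 0 means the test passed.
    result = subprocess.run(
        ["unrar", "t", "-p" + password, archive, member],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

with open("wordlist.txt") as wordlist:
    for candidate in (line.strip() for line in wordlist):
        if try_password(candidate, "archive.rar", "smallest.nfo"):
            print("password found:", candidate)
            break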
See: How to crack AES-128 encryption used in WinRar?
As to your question: no, there is no way to test just the password. The password is not stored in the encrypted archive file. As far as I know, any password you supply, combined with the encrypted data, will produce some decrypted data. In one universe or another, that decrypted data represents a valid RAR archive. The CRC check verifies that the archive can be considered valid; if it fails, it means the universe in which the password is valid is not the same as yours ;)
In the documentation's sample code for how to deal with user-uploaded files, they save the file under a trusted filename for file storage (via GetRandomFileName), and keep a trusted filename for HTML display.
In the comments it says: "In most production scenarios, an anti-virus/anti-malware scanner API is used on the file before making the file available for download or for use by other systems."
Does that scan happen before the file is saved with a random filename, or after? I thought the point of saving it under a random filename was so that it doesn't get executed. And once the scanning is done, how is the file made available? I guess the file just has to be renamed if it passes the scan, or deleted otherwise? If so, what is the proper way to get the original file extension? And do you know of any good free scanners that are popular to use?
I'm trying to learn web development. Thanks for your time and help.
The renaming of the file here has nothing to do with anti-virus protection. Files don't execute themselves, whatever their names are. The same goes for the virus scan: it isn't there to protect the server, it's there to protect the users. If your server executes a binary it received from a client, that's a security breach regardless of whether the binary is a virus.
The renaming here is probably done just so that duplicates can be stored. That said, in production scenarios you'll rarely store incoming files as physical files on the file system. They usually go into the database as blobs, so the name is not an issue.
This is just a sample app designed to teach how to work with binary streams and file controllers. Don't expect too much from it in terms of applicability to real-world solutions.
I have a website that uses the old "list files" style of doing things, and I want to compute a hash of a file there before downloading it to the user's local system. I know how to hash a local file, but there seems to be little information on whether I can do this without downloading the remote file first. My logic is: if the user already has the same file, why waste time downloading it? So, is this possible?
After further contemplation I decided that the date-modified comparison is actually the behavior I want. If a client modifies a file by accident, there is now an option to correct it. If they modify it on purpose, I certainly don't want to wipe out their work.
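A minimal sketch of that comparison, assuming the server sends a Last-Modified header and the requests library is available (URL and path are hypothetical):

import os
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

import requests

def remote_is_newer(url, local_path):
    # HEAD request: fetch only the headers, not the file body.
    response = requests.head(url, allow_redirects=True)
    response.raise_for_status()
    header = response.headers.get("Last-Modified")
    if header is None or not os.path.exists(local_path):
        return True  # nothing to compare against, so download
    remote_mtime = parsedate_to_datetime(header)
    local_mtime = datetime.fromtimestamp(os.path.getmtime(local_path), tz=timezone.utc)
    return remote_mtime > local_mtime

if remote_is_newer("https://example.com/files/data.bin", "data.bin"):
    print("download needed")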
I created an encrypted disk image in Mac OS X ML 10.8 (using Disk Utility or the hdiutil command). I want to read a file on that disk, but I can't simply mount it, because while it is mounted other apps could read it before I unmount it. Please help me. (hdiutil command here: http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/hdiutil.1.htm)
To do this you would have to read and decrypt the dmg file yourself and then interpret the HFS file system inside the disk image to get at your file. It's not easy but certainly possible. Take a look at the HFSExplorer source code.
But I wouldn't put too much energy into this. Either use a different, easier-to-read file format to store your encrypted data, or go with pajp's solution. And remember: no matter what you do, once you decrypt your file the user will be able to get at the decrypted data. You can make this harder, but you can't prevent it.
I think the only reasonable way is to mount the disk image. To do it securely, you can use the -mountrandom and -nobrowse options of hdiutil attach. This mounts the disk image at a randomized path name and prevents it from showing up in the UI.
hdiutil attach -mountrandom /tmp -nobrowse /tmp/secret_image.dmg
Assuming the disk image has exactly one HFS partition, you can parse the randomized mount path like this:
hdiutil attach -mountrandom /tmp -nobrowse /tmp/secret.dmg | awk '$2 ~ /Apple_HFS/ { print $3 }'
Or you can use the -plist option to get the output as plist XML, which can be parsed using XML tools or converted to JSON with plutil -convert json.
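A minimal sketch of the -plist route from Python, assuming a single mountable HFS volume inside the image (the image path is hypothetical; an encrypted image would also need the -stdinpass option to receive the passphrase):

import plistlib
import subprocess

output = subprocess.run(
    ["hdiutil", "attach", "-mountrandom", "/tmp", "-nobrowse", "-plist",
     "/tmp/secret.dmg"],
    check=True, capture_output=True,
).stdout

# system-entities lists every partition; only mounted ones carry a mount-point.
info = plistlib.loads(output)
mount_points = [entity["mount-point"]
                for entity in info["system-entities"] if "mount-point" in entity]
print(mount_points[0])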
Of course, an attacker with root access can still monitor for new mounts and intercept your disk image before you have a chance to unmount it, but if your attacker has root, then pretty much all bets are off.
I know that Windows can intrinsically detect and verify signatures on PE files and on some types of script file (.vbs, .ps1 and .wsf). However, I'm curious whether there is a way to attach or associate a signature with a file type that doesn't directly support signatures, such as .iso or .zip files.
Driver packages that contain a mixture of binaries and .inf files use signed .cat files to allow their constituents to be signed indirectly, but you have to use "signtool.exe verify" to validate a file, and I am getting mixed results with this approach.
I guess I am looking for some kind of signed manifest file that lets users easily verify that the set of files they downloaded hasn't been corrupted in transit or tampered with by a third party, and which doesn't involve them computing MD5 hashes manually and comparing the results against values stored in a text file (which might itself have been tampered with).
NTFS's Alternate Data Streams seem like a good fit for storing the signatures - this would allow you to attach a signature to any kind of file, so you wouldn't need a separate manifest.
You would of course still need to develop an application to verify the signatures - there is no way around that.
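As a minimal sketch, an alternate data stream is addressed as filename:streamname, so reading and writing one needs no special API (the file and stream names below are hypothetical):

# Works only on NTFS volumes; "installer.iso:signature" names an alternate data stream.
signature = b"..."  # whatever bytes your signing tool produces

with open("installer.iso:signature", "wb") as stream:
    stream.write(signature)

with open("installer.iso:signature", "rb") as stream:
    stored = stream.read()

One caveat worth knowing: streams are silently dropped when a file is copied to a non-NTFS volume or transferred over most protocols, so the attached signature only survives on NTFS.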
We put hundreds of image files on Amazon S3 that our users need to synchronize to their local directories. In order to save storage space and bandwidth, we zip the files stored on S3.
On the user's end they have a Python script that runs every 5 minutes to get the current list of files and download new/updated ones.
My question is: what's the best way to determine what is new or changed and needs downloading?
Currently we add an extra header to the compressed file that contains the MD5 value of the uncompressed file...
We start with a file like this:
image_file_1.tif 17MB MD5 = xxxx1234
We compress it (with 7zip) and put it to S3 (with Python/Boto):
image_file_1.tif.z 9MB MD5 = yyy3456 x-amz-meta-uncompressedmd5 = xxxx1234
The problem is we can't get a large list of files from S3 that includes the x-amz-meta-uncompressedmd5 header without an additional API call for EACH file (SLOW for hundreds/thousands of files).
Our most practical solution so far is to have users fetch the full list of files (without the extra headers) and download any file that does not exist locally. If a file does exist locally, they make an additional API call to get the full headers and compare the local MD5 checksum against x-amz-meta-uncompressedmd5.
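For reference, that per-file metadata lookup looks roughly like this with boto3, the current AWS SDK (our setup uses the older boto; the bucket and key names are hypothetical):

import boto3

s3 = boto3.client("s3")

def uncompressed_md5(bucket, key):
    # One HEAD request per key -- this is the slow part for thousands of files.
    head = s3.head_object(Bucket=bucket, Key=key)
    # x-amz-meta-* headers come back in the Metadata dict with the prefix stripped.
    return head["Metadata"].get("uncompressedmd5")

print(uncompressed_md5("my-image-bucket", "image_file_1.tif.z"))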
I'm thinking there must be a better way.
You could include the MD5 hash of the uncompressed image in the compressed filename.
So image_file_1.tif could become image_file_1.xxxx1234.tif.z
The user's Python script that does the synchronizing would then have the information it needs to decide whether to fetch the file again from S3, and could either strip the MD5 part out of the filename or keep it, depending on what you want to do.
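A minimal sketch of that decision, assuming the naming scheme above (a real MD5 is 32 hex characters; the xxxx1234 in the example is just a placeholder):

import hashlib
import os
import re

# image_file_1.<md5>.tif.z -> stem "image_file_1", ext "tif"
NAME_RE = re.compile(r"^(?P<stem>.+)\.(?P<md5>[0-9a-f]{32})\.(?P<ext>[^.]+)\.z$")

def needs_download(s3_name, local_dir):
    match = NAME_RE.match(s3_name)
    if not match:
        return True  # unexpected name: fetch it to be safe
    local_path = os.path.join(local_dir, match["stem"] + "." + match["ext"])
    if not os.path.exists(local_path):
        return True
    digest = hashlib.md5()
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() != match["md5"]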
Alternatively, you could maintain on S3 a single file containing the full file list, including the MD5 metadata. The Python script then just needs to fetch that one file, parse it, and decide what to do.
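That single file could be as simple as a JSON map from key to uncompressed MD5; here is a sketch with boto3 (the bucket, key, and format are assumptions, not anything S3 prescribes):

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-image-bucket"  # hypothetical

def publish_manifest(entries):
    # entries maps compressed key -> uncompressed MD5; rewritten after each upload batch.
    s3.put_object(Bucket=BUCKET, Key="manifest.json",
                  Body=json.dumps(entries).encode())

def fetch_manifest():
    # Clients fetch one small file instead of issuing one HEAD request per key.
    body = s3.get_object(Bucket=BUCKET, Key="manifest.json")["Body"].read()
    return json.loads(body)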