Compare files via CRC - file-io

I have 2 zip files. Inside each zip file there are multiple text and binary files. However, not all of the files are the same: some differ because of time stamps and other data, while others are identical.
Can I use CRC to definitively prove that specific files are identical?
Example: I have files A, B, C in both archives. Can I use CRC to prove that files A, B, and C are identical in both archives?
Thank you.

Definitively? No - CRC collisions are perfectly possible, just very improbable.
If you need absolute proof then you're going to need to compare the files byte-for-byte. If you just mean within the expectations of everyday use, sure: if the file size is the same and the CRC is the same, then it's very, very likely the files are the same.
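Since zip archives already store a CRC-32 and the uncompressed size for every member, you can use those as a cheap first check and fall back to a byte-for-byte comparison for certainty. A minimal Python sketch (the archive names and member names are placeholders):

import zipfile

def members_match(path_a, path_b, names):
    with zipfile.ZipFile(path_a) as za, zipfile.ZipFile(path_b) as zb:
        for name in names:
            info_a, info_b = za.getinfo(name), zb.getinfo(name)
            # Cheap check first: sizes and stored CRC-32 values from the zip directory.
            if info_a.file_size != info_b.file_size or info_a.CRC != info_b.CRC:
                return False
            # For absolute certainty, compare the decompressed bytes as well.
            if za.read(name) != zb.read(name):
                return False
    return True

print(members_match("first.zip", "second.zip", ["A", "B", "C"]))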

Strings indexing tool for binary files

Very often I have to deal with very large binary files (from 50 to 500 GB), in different formats, which basically contain mixed data, including strings.
I need to index the strings inside the file, creating a database or an index, so I can do quick searches (basic searches or complex ones with regex). The output of the search should of course be the offset of the found string in the binary file.
Does anyone know a tool, framework or library which can help me with this task?
You can run 'strings -t d' (Linux / OS X) on it to pull out strings with their corresponding offset and then put that into Solr or Elastic. If you want more than just ASCII though, it gets more complex.
Autopsy has its own strings extraction code (for UTF-8 and UTF-16) and puts it into Solr (and uses Tika if the file format is supported), but it doesn't record the offset from a binary file, so it may not meet your needs.
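If a dedicated tool turns out to be overkill, the offset-plus-string output of strings -t d is straightforward to approximate and feed into Solr or Elasticsearch yourself. A rough Python sketch that reads in chunks so it copes with very large files ('image.bin' is a placeholder, and a string that straddles a chunk boundary may be reported in two pieces):

import re

PRINTABLE = re.compile(rb'[\x20-\x7e]{4,}')   # runs of 4+ printable ASCII bytes

def ascii_strings(path, chunk_size=1 << 20):
    # Yields (byte offset, string) pairs, roughly what 'strings -t d' prints.
    offset = 0
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            for m in PRINTABLE.finditer(chunk):
                yield offset + m.start(), m.group().decode('ascii')
            offset += len(chunk)

for off, s in ascii_strings('image.bin'):
    print(off, s)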

Notating large batch of files

I have about 30,000 files, all with different file name formats. I want to put together a list of "unique" filename patterns, where the dates/etc. are replaced by generic characters/symbols.
For example:
20160105asdf_123456_CODE.txt
Would be notated into:
YYYYMMDD*_######_XXXX.txt
Any ideas on how to do this efficiently on a large scale? I thought about parsing each name out by delimiter ("_"), but I'm sure there's something a lot easier out there.
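One way to approach it, sketched below in Python: mask the variable parts of each name (digit runs, uppercase codes) and count the distinct patterns that remain. The masking rules are guessed from the single example above and will need tuning to your real naming conventions; 'filenames.txt' is a placeholder for a file listing one name per line.

import re
from collections import Counter

def normalise(name):
    # Tuned to the example 20160105asdf_123456_CODE.txt -> YYYYMMDD*_######_XXXX.txt
    name = re.sub(r'[A-Z]{2,}', lambda m: 'X' * len(m.group()), name)  # CODE -> XXXX
    name = re.sub(r'^\d{8}', 'YYYYMMDD', name)                         # leading date
    name = re.sub(r'(?<=YYYYMMDD)[a-z]+', '*', name)                   # variable text after the date
    name = re.sub(r'\d+', lambda m: '#' * len(m.group()), name)        # remaining digit runs
    return name

with open('filenames.txt') as f:
    patterns = Counter(normalise(line.strip()) for line in f if line.strip())

for pattern, count in patterns.most_common():
    print(count, pattern)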

ETL file loading: files created today, or files not already loaded?

I need to automate a process to load new data files into a database. My question is about the best way to determine which files are "new" in an automated fashion.
Files are retrieved from a directory that is synced nightly, so the list of files keeps growing. I don't have the option to wipe out files that I have already retrieved.
New records are stored in a raw data table that has a field indicating the filename where each record originated, so I could compare all filenames currently in the directory with filenames already in the raw data table, and process only those filenames that aren't in common.
Or I could use timestamps that are in the filenames, and process only those files that were created since the last time the import process was run.
I am leaning toward using the first approach since it seems less prone to error, but I haven't had much luck finding whether this is actually true. What are the pitfalls of determining new files in this manner, by comparing all filenames with the filenames already in the database?
File name comparison:
If you have millions of files, then comparing names against the table might not be what you are looking for.
You must be sure that the files in that folder never get deleted.
Get filenames by date:
Since the files are retrieved only once a day, filtering by date can guarantee accuracy (even if files were created only milliseconds apart).
This will be more efficient when there are many files.
Note that Pentaho gives the modified date, not the created date.
To do either of the above, you can use the Pentaho Get File Names step. Configuration of the Get File Names step:
File/Directory: give the folder path that contains the files.
Wildcard (RegExp): .*\.* to get all files, or .*\.pdf to get a specific format.
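For the first approach (compare the directory contents against filenames already loaded), the logic is a simple set difference. A minimal Python sketch, with sqlite3 standing in for the real database; the table name raw_data, the column source_filename and the directory path are assumptions:

import os
import sqlite3

conn = sqlite3.connect('etl.db')
already_loaded = {row[0] for row in
                  conn.execute('SELECT DISTINCT source_filename FROM raw_data')}

incoming_dir = '/data/incoming'
new_files = [name for name in os.listdir(incoming_dir)
             if name not in already_loaded
             and os.path.isfile(os.path.join(incoming_dir, name))]

for name in sorted(new_files):
    print('would load', os.path.join(incoming_dir, name))  # hand off to the actual import step here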

Processing Files - Keeping Track

Currently we have an application that picks files out of a folder and processes them. The processing is simply converting images to a Base64 string and putting that into a database. It's simple enough, but there are two pretty major issues with it.
Problem
The problem is that after a file has been processed it won't need processing again, and for performance reasons we really don't want it to be reprocessed.
Moving the files after processing is also not an option as these image files need to always be available in the same directory for other parts of the system to use.
This program must be written in VB.NET as it is an extension of a product already using this.
Ideal Solution
What we are looking for really is a way of keeping track of which files have been processed so we can develop a kind of ignore list when running the application.
For every image file Image0001.ext, once it has been processed, create a second file Image0001.ext.done. When looking for files to process, filter on the extension of your image type, and as each filename is found check for the existence of a corresponding .done file.
This approach will get incrementally slower as the number of files increases, but unless you move (or delete) files this is inevitable. On NTFS you should be OK until you get well into the tens of thousands of files.
EDIT: My approach would be to apply KISS:
Everything is in one folder, so there cannot be a huge number of images: I don't need to handle hundreds of files per hour, every hour of every day (the first run might be different).
Writing a console application to convert one file (passed on the command line) is easy. Left as an exercise.
There is no indication of any urgency to the conversion: it can be scheduled to run every 15 minutes (say). Also left as an exercise.
Use PowerShell to run the program for all images not already processed:
cd $TheImageFolder;
# .png assumed as image type. Can have multiple filters here for more image types.
# ProcessFile here stands for the one-file console converter described above.
Get-ChildItem -Filter *.png |
Where-Object { -not (Test-Path -Path ($_.FullName + '.done')) } |
ForEach-Object { ProcessFile $_.FullName; New-Item ($_.FullName + '.done') -ItemType File }
In a table, store the file name, file size, (and file hash if you need to be more sure about the file), for each file processed. Now, when you're taking a new file to process, you can compare it with your table entries (a simple query would do). Using hashes might degrade your performance, but you can be a bit more certain about an already processed file.
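To illustrate that bookkeeping (sketched in Python purely for brevity; the real implementation would be VB.NET as required), store the name, size and a SHA-256 hash per processed file and skip anything that already matches:

import hashlib
import os
import sqlite3

db = sqlite3.connect('processed.db')
db.execute('CREATE TABLE IF NOT EXISTS processed ('
           'name TEXT, size INTEGER, sha256 TEXT, PRIMARY KEY (name, size, sha256))')

def file_hash(path):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(1 << 20), b''):
            h.update(block)
    return h.hexdigest()

def needs_processing(path):
    key = (os.path.basename(path), os.path.getsize(path), file_hash(path))
    row = db.execute('SELECT 1 FROM processed WHERE name=? AND size=? AND sha256=?', key).fetchone()
    return row is None, key

def mark_processed(key):
    db.execute('INSERT OR IGNORE INTO processed VALUES (?, ?, ?)', key)
    db.commit()

# Usage: todo, key = needs_processing(path); convert if todo; then mark_processed(key).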

Why are an integer's bytes stored backwards? Does this apply to headers only?

I'm currently trying to decipher WAV files. From headers to the PCM data.
I've found a PDF (http://www.tdt.com/T2Support/technical_notes/tn0132.pdf) detailing the anatomy of a WAV file, and I've been able to extract and make sense of the appropriate header data using Ghex2. But my questions are:
Why are the integer's bytes stored backwards? I.e. decimal 20 is stored as the bytes 14 00 00 00 instead of 00 00 00 14.
Are the integers of the PCM data also stored backwards?
WAV files are little-endian (least significant byte first) because the format originated on operating systems running on Intel processor-based machines, which use the little-endian format to store numbers.
If you think about it, it kind of makes sense: if you want to cast a long integer down to a short one, or even to a char, the starting address stays the same; you just look at fewer bytes.
Consequently, for 16-bit encoding and upwards, the little-endian format is used for the PCM data as well. This is quite handy, since you will be able to pull the samples in as integers. Don't forget that they are stored as two's complement signed integers if they are 16-bit, but not if they are 8-bit (see http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/WAVE.html for more detail).
"Backwards" is subjective. Some machines are big-endian, others are little-endian. In byte-oriented contexts like file formats and network protocols, the order is arbitrary. Some formats like to specify big- or little-endian, others like to be flexible and accept either form, with a flag indicating which is in use.
Looks like WAV files just like little-endian.
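To see the little-endian layout in practice, here is a small Python sketch that reads the first header fields of a WAV file with struct; the '<' in the format strings means least-significant byte first, which is exactly how the bytes 14 00 00 00 decode to 20. It assumes a plain PCM file whose fmt chunk immediately follows the RIFF header, and 'sound.wav' is a placeholder path.

import struct

with open('sound.wav', 'rb') as f:
    riff, riff_size, wave = struct.unpack('<4sI4s', f.read(12))   # b'RIFF', chunk size, b'WAVE'
    fmt_id, fmt_size = struct.unpack('<4sI', f.read(8))           # b'fmt ', usually 16 for PCM
    (audio_format, channels, sample_rate,
     byte_rate, block_align, bits_per_sample) = struct.unpack('<HHIIHH', f.read(16))

print(channels, sample_rate, bits_per_sample)
# 16-bit PCM samples are signed little-endian integers too, e.g.
# struct.unpack('<h', b'\x14\x00') -> (20,)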