How to remove the .efs file extension from 1000s of recovered files in one folder

I recently recovered a 1.5TB external HDD that crashed. The program I used to recover the files was Active Undelete Enterprise; it's excellent. When the files were successfully recovered, they were all saved with a .efs extension, so files looked like mydocument.docx.efs. At first I thought they were encrypted and needed to be decrypted; I spent 10 minutes on it and then realized I just needed to remove the .efs from the end of the filename, after which mydocument.docx works perfectly. The problem is that I now have over 55,000 files within hundreds of folders where I need to simply remove the .efs from each filename. Does anyone know how to do this?

From a command prompt window, navigate to the top level directory where these files reside.
Type the command
DIR /S/B >>filelist.txt
This command will give you a bare format file listing of the current directory plus all nested subdirectories without any extraneous information. The list will be contained in the text file named "filelist.txt" or whatever else you choose to call it. I would then use this text file in a text editor to convert every line of text from, for example,
C:\Users\dlucas\.gimp-2.8\mathmap\file1.png.efs
to
rename c:\Users\dlucas\.gimp-2.8\mathmap\file1.png.efs file1.png
to give a simple example of a file that I just found on my system using this method.
You will need to use a text editor with columnar (block) editing capability, since you have to modify so many lines. Old programmer's editors such as CodeWright made this really simple, while modern editors such as Eclipse or Notepad++ make this a little more difficult and may require a columnar editing plugin, depending on the version. You basically make a columnar copy of all of the text in the file, then paste the copy far enough to the right that the second column of filenames and paths won't overwrite any of the existing ones. You can then use the columnar editing features to delete the path portion of each entry in the second column, since the rename command requires that the second argument be just the base filename and extension, without path information. Finally, use the columnar editing features to prepend every line with "RENAME ". If you attempt to do this without columnar editing features, you will find it slow going!
An alternate way to do this is to use a "regular expression" search-and-replace to build each rename command. If you are not familiar with "regular expressions", ask a programmer friend, as this is not an easy topic to learn from scratch. If you are familiar with regular expressions, this is probably the simplest way to perform this task. I haven't used them in many years and no longer recall the exact syntax to use, or I would tell you myself.
Regardless of what kind of editor you use, the goal is to turn this ASCII list of paths and filenames into a batch file (simply rename filelist.txt to filelist.bat when you are finished editing). You can then run the batch file by typing filelist.bat at a command prompt.
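If you would rather not hand-edit the list at all, a short C# program can do the same rename in one pass. This is only a rough sketch, assuming every recovered file ends in exactly .efs and that stripping the extension never collides with an existing file; try it on a copy of one folder first.

using System;
using System.IO;

class StripEfs
{
    static void Main(string[] args)
    {
        // Top-level folder that holds the recovered files; pass it on the command line.
        string root = args.Length > 0 ? args[0] : @"D:\Recovered";

        foreach (string path in Directory.EnumerateFiles(root, "*.efs", SearchOption.AllDirectories))
        {
            // "mydocument.docx.efs" -> "mydocument.docx"
            string target = path.Substring(0, path.Length - ".efs".Length);

            if (File.Exists(target))
            {
                Console.WriteLine("Skipping " + path + ": target already exists");
                continue;
            }
            File.Move(path, target);
        }
    }
}

Compile it and run it with the top-level recovery folder as the argument; it walks every nested subdirectory, just like DIR /S does.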
I have just run into this same problem myself, using the same really wonderful tool that you used. I am writing this while waiting for the undelete program to finish. That it restores files with this extra extension seems very counter-intuitive, so I will look for an option to make it not do this when it finishes. If I find one, I will post a new answer here that is more specific to this tool. Otherwise, I am going to have to rename all kazillion files, just as you did.

You experienced this problem because the disk that you recovered your files to "does not support encryption", according to the Active@ UNDELETE documentation. The documentation offers no further explanation of what kind of disks support encryption, etc.
They offer a Decrypt command that restores the files' proper names as a post-processing step. Unfortunately, this requires that you "include" each and every file to be decrypted, with no support for wildcards or for recursing into subdirectories, so that is a non-starter in my opinion, given that both of us have hundreds of thousands of files to rename.
I did find that by selecting a normal fixed (non-removable) hard drive as the destination of the recovery, the resulting files do not end up "encrypted" (i.e., they are recovered with the proper file name and extension). I originally chose a large USB flash drive as the destination, and those files were stored in their "encrypted" state (not really encrypted, but potentially so, hence the .efs extension). Of course, this meant that I had to run the recovery all over again after switching to a regular hard drive (it takes about 16 hours to recover 80GB worth of files due to the presence of many sector CRC errors).


Attaching a specific piece of non-intrusive info to a file or folder to keep a connection to a program

This is going to be a question with a lot of hypotheticals, but it's been on my mind for a while now and I finally want to get some perspectives on how to tackle this "issue". For the sake of the question, I'll make up an example requirement of how the program I want to make would work on a conceptual level without too many specifics.
The Problem
I want to create a program to keep track of miscellaneous info for files and folders. This miscellaneous info can be anything from comments, authors, to more specific info like the original source of the file (a URL for example), categories, tags, and more. All this info is kept track of in an SQLite database.
Now... how would you create a connection from the file (or folder) to the database? Whatever file is added to the program, the file should continue to operate independently of the program, meaning you should be able to edit, copy, move, rename or do anything else with the file that you would usually do with your OS of choice - even delete it.
You should even be able to archive it, zip it, upload it somewhere or do other things that temporarily or permanently remove the file from your system, without losing the connection to the database. The program itself never actually touches the files, except to generate a new entry in the database, but obviously there has to be some kind of reference in the file to a database entry in the program.
Yes, I know that if you delete the file, you would have a dead entry in the database. For now, just treat this as an unfortunate reality that can't be solved unless you incorporate the file more closely into the program.
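To make the database side concrete, here is roughly the kind of table I have in mind; the column names and the Microsoft.Data.Sqlite usage below are just my own illustration, not a finished design.

using Microsoft.Data.Sqlite;

class FileInfoDb
{
    static void Main()
    {
        using var connection = new SqliteConnection("Data Source=fileinfo.db");
        connection.Open();

        var create = connection.CreateCommand();
        create.CommandText =
            "CREATE TABLE IF NOT EXISTS files (" +
            "  id      INTEGER PRIMARY KEY AUTOINCREMENT," +
            "  comment TEXT," +
            "  author  TEXT," +
            "  source  TEXT," +   // e.g. the original URL
            "  tags    TEXT)";
        create.ExecuteNonQuery();

        // The open question is what, on the file's side, points back at files.id.
        var insert = connection.CreateCommand();
        insert.CommandText = "INSERT INTO files (comment, author) VALUES ($c, $a)";
        insert.Parameters.AddWithValue("$c", "example entry");
        insert.Parameters.AddWithValue("$a", "me");
        insert.ExecuteNonQuery();
    }
}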
Possible solutions and why I decided against them
Reference inside Filename
Probably the most obvious choice, you could just have a reference inside the filename to point to a database entry, for example by including the id at the start of the filename:
#1 my-example-file.txt
#12814 this-is-one-of-many-files.txt
Obviously, that goes against what I established earlier, as you would be restricted from freely renaming the file. You would always have to keep in mind not to mess with the id inside the filename, or else the connection to your program is broken. Unfortunately, that is the best bet I currently have, but I would like to avoid that approach if possible.
Alternate Data Streams (ADS)
A pretty cool feature I recently discovered that's available on NTFS file systems, ADS allows you to store different streams of data for your files, to grossly simplify it. You could attach a data stream to your file that stores the id of the database entry in the program, and a regular user would never be able to mess with it directly.
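As a rough sketch of what that looks like from C# (the stream name yourapp.dbid is made up, and note that the colon-in-path syntax is accepted by current .NET runtimes on Windows but rejected by the old .NET Framework path checks):

using System;
using System.IO;

class AdsDemo
{
    static void Main()
    {
        const string dataFile = "example.txt";               // the user's ordinary file
        const string idStream = dataFile + ":yourapp.dbid";   // hypothetical stream name

        File.WriteAllText(dataFile, "normal file contents");  // the main stream, what the user sees
        File.WriteAllText(idStream, "12814");                 // database id, tucked into the ADS

        // Explorer and most tools only show the main stream; the id travels with
        // the file for as long as it stays on NTFS.
        Console.WriteLine(File.ReadAllText(idStream));        // prints 12814
    }
}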
However, since this is a feature reserved for specific file systems, there are some ugly side effects to ADS, as you can easily lose that part of the file by:
moving/copying it to a file system that doesn't support ADS, such as the file systems most often used in removable drives
uploading it to a cloud then later downloading it
moving it to another OS that might not support ADS or treats it in an unexpected way
zipping it
Thus I can't really rely on ADS either.

Prevent renaming of file from another binary on Mac OS

I am working with multiple processes that write to the same directory.
I have a directory dir1/
My process creates a file a.txt under dir1/. However, the other process creates a-temp1.txt and renames it to a.txt. I don't have control over the other process, since that code comes from a library. Can I prevent a-temp1.txt from being renamed?
There's nothing you can do that the other process can't undo. Your best hope (other than changing your program to work sanely) is that the other process doesn't try too hard to do the rename. That is, it tries the simple approach and gives up if that fails.
In particular, you can set the UF_IMMUTABLE flag on either file, and that will prevent one from being renamed to replace the other. You can set the flag using chflags(). Using Cocoa, you could also use [someURL setResourceValue:@YES forKey:NSURLIsUserImmutableKey error:NULL].
Keep in mind that you won't be able to change the file in any other way, either, until that flag is removed. If the other process is determined to rename the file, it has permission to remove the flag just like your process does.
Also keep in mind that a system such as this is inherently race-prone.
You really ought to use separate names for the files, or separate directories, or ditch that library that doesn't give you the control you need.
Set the user immutable flag, chflags(..., uchg). This will keep the other process from changing your file unless it takes action to clear the bit. Of course, I don't know how the other process will react to you putting things in its way, but that wasn't the question.
You can use chflags() on an HFS+ (Mac OS X) file system to set the UF_APPEND attribute. (Do a man 2 chflags.) That will permit appending to the file, but not deleting or renaming, even by the same user.
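If you happen to be doing this from .NET code rather than the shell, the same call is reachable with a small P/Invoke. This is just a sketch; the flag values are copied from sys/stat.h on macOS (double-check them against your SDK), and the other process can of course clear them again:

using System;
using System.Runtime.InteropServices;

class ImmutableDemo
{
    // int chflags(const char *path, u_int flags), from the BSD layer on macOS.
    [DllImport("libc", SetLastError = true)]
    private static extern int chflags(string path, uint flags);

    private const uint UF_IMMUTABLE = 0x00000002; // "uchg": no writes, renames or deletes
    private const uint UF_APPEND    = 0x00000004; // "uappnd": append-only

    static void Main()
    {
        if (chflags("a.txt", UF_IMMUTABLE) != 0)
            Console.WriteLine("chflags failed, errno " + Marshal.GetLastWin32Error());

        // ... later, pass 0 to clear the user flags so the file can be changed again.
        chflags("a.txt", 0);
    }
}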
You can, but it is unlikely to solve your problem. I strongly suspect this is an X-Y problem, and almost certainly the correct solution is to redesign some part of this system entirely, probably by changing your file names, using unique temporary files, moving to another directory, or reworking the usage of the library (libraries only do what callers tell them to do; and libraries are just code anyway). You shouldn't try to defeat another process; you're all working for the same user.
All that said, sure, you can prevent your own user ID from renaming over the file. Just deny yourself permission. You can modify the file:
chmod 400 a.txt
That says that you can read the file but may not write it. However, if you already have an open file handle, you may continue to use it (so you can keep writing to the file, even though another process running as the same user may not).
Similarly, you may change permissions on the directory:
chmod 500 .
This would prevent the rename because file names are kept in the directory.

How to put files inside files

MS Word's .docx files contain a bunch of .xml files.
Setup.exe files spit out hundreds of files that a program uses.
Zips, rars etc also hold lots of compressed stuff.
So how are they made? What does MS Word or another program that produces these files have to do to put files inside files?
When I looked this up I just got a bunch of results about compression, but let's say I wanted to make a program that 'wraps' files inside a file without making the final result any smaller. What would I even have to write?
I'm not asking/expecting any source code that does this, I just need a pointer. Is there something you think I'm misunderstanding based on what I've asked here?
Even a simple link to an article or some documentation would be greatly appreciated.
Ok, I'll just come up with some headers for ordinary files and write them along with the bytes of the actual files into one custom-defined file. You guys were very helpful, thank you!
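That is essentially all an uncompressed archive is. As a rough sketch, here is one possible "wrapper" layout in C# - a file count, then name/length/bytes per file; the format itself is just something made up for illustration:

using System.IO;

static class SimpleArchive
{
    // Pack: [file count] then, for each file, [name][byte length][raw bytes].
    public static void Pack(string archivePath, params string[] files)
    {
        using var writer = new BinaryWriter(File.Create(archivePath));
        writer.Write(files.Length);
        foreach (string file in files)
        {
            byte[] data = File.ReadAllBytes(file);
            writer.Write(Path.GetFileName(file)); // BinaryWriter length-prefixes strings
            writer.Write(data.Length);
            writer.Write(data);
        }
    }

    // Unpack: read the same fields back in the same order.
    public static void Unpack(string archivePath, string targetDir)
    {
        using var reader = new BinaryReader(File.OpenRead(archivePath));
        int count = reader.ReadInt32();
        for (int i = 0; i < count; i++)
        {
            string name = reader.ReadString();
            int length = reader.ReadInt32();
            byte[] data = reader.ReadBytes(length);
            File.WriteAllBytes(Path.Combine(targetDir, name), data);
        }
    }
}

SimpleArchive.Pack("bundle.bin", "a.docx", "b.png") followed by SimpleArchive.Unpack("bundle.bin", "out") round-trips the files; real formats like ZIP just add a table of contents, checksums and compression on top of the same idea.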
Historically, Windows has had a number of technologies to support solutions like this. These were often called Compound Files or Structured Storage. However, I don't think the newer Office documents use these technologies. I think the Office file formats are basically ZIP files with different extensions. If you change a file with a .docx extension to .zip and open it with your favorite compression tool, you'll see a bunch of folders and XML files.
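You can see this from code, too; for example, with the built-in System.IO.Compression classes (the document name below is made up):

using System;
using System.IO.Compression; // ZipFile / ZipArchive

class DocxPeek
{
    static void Main()
    {
        // A .docx is a ZIP container; list what is inside it.
        using ZipArchive archive = ZipFile.OpenRead("report.docx");
        foreach (ZipArchiveEntry entry in archive.Entries)
            Console.WriteLine(entry.FullName); // e.g. [Content_Types].xml, word/document.xml
    }
}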
Here are some links to descriptions of different file formats that create "files within files"
Zip file format
Compound File Binary Format (CFBF)
Structured Storage
Compound Document File Format
Office Open XML I: Exploring the Office Open XML Formats
At least on POSIX systems (e.g. Linux), a file is only a stream (i.e. a sequence) of bytes. And you can only grow (or shrink, i.e. truncate) it at the end - there is no way to insert bytes in the middle (without copying the rest).
You need some conventions, and some additional software, to handle it otherwise.
You might be interested in SQLite, which gives you a library to handle a file (e.g. *.sqlite) as an SQL database
You could also use GDBM - a library giving you some indexed file abstraction.
libtar is a library to manipulate tar archives. See also tardy, a tar file postprocessor.

Implement a self extracting archive?

I know I can use 7z or WinRAR, but I want to learn this for myself.
How would I implement a self-extracting archive? I can use C# or C++, but let me run down the problem.
When I open the exe, I need some kind of GUI asking where to extract the files. Once the user says OK, I should obviously extract them. I implemented a simple example in C# WinForms already, BUT my problem is HOW do I get the filenames and binary contents of the files into an exe?
Once upon a time I asked Is it safe to add extra data to end of exe? and the answer suggested that if I just add data to the end of the exe it may be picked up by a virus scanner. Now, it's pretty easy to write the length of the archive as the last 4 bytes and just append the data to my generic exe, and I do believe my process can read its own exe, so this could work. But it feels hacky, and I'd rather not have people accuse me of writing a virus just because I am using this technique. What's the proper way to implement this?
Note: I checked the self-extracting tag and many of the questions are about how to use self-extracting archives, not how to implement one. Except this one, which is asking something else: Self-extracting self-checking executable
-edit- I made two self-extracting archives with 7z and compared them. It looks like... well, it IS the 7z.sfx file but with a regular 7z archive appended. So... there is nothing wrong with doing this? Is there a better way? I'm targeting Windows and can use the C# compiler to help, but I don't know how much extra work or how difficult it may be programmatically, and maybe adding data to the end of the exe isn't bad?
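For what it is worth, the "length in the last 4 bytes" scheme described above is easy to read back from the stub at run time. A rough sketch, assuming the builder appended the raw archive followed by a 4-byte length, and assuming a classic .NET exe where Assembly.Location points at the file on disk:

using System;
using System.IO;
using System.Reflection;

class SfxStub
{
    static byte[] ReadAppendedArchive()
    {
        // The running stub itself: stub bytes + archive bytes + 4-byte archive length.
        string exePath = Assembly.GetEntryAssembly().Location;

        using var stream = File.OpenRead(exePath);
        using var reader = new BinaryReader(stream);

        // The last 4 bytes hold the archive length...
        stream.Seek(-4, SeekOrigin.End);
        int archiveLength = reader.ReadInt32();

        // ...and the archive sits immediately before that length field.
        stream.Seek(-4 - archiveLength, SeekOrigin.End);
        return reader.ReadBytes(archiveLength);
    }
}

The builder side is then just copying the stub, appending the archive bytes, and appending BitConverter.GetBytes(archive.Length).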
It is possible. I used the following technique once, when we needed to distribute updates for an application, but the computers were configured so that the end user had no permission to change application files. The updater was supposed to log on to an administrator account and update the required files (so we came across the identical problem: how to distribute many files as a single executable).
The solution was file resources in C#. All you need to do is:
Create a resource file in your C# project (file ending with .resx).
Add new resource of type "file". You can easily add existing files as byte[] resources.
In your program you can simply extract the resource to a file:
using (var file = new System.IO.FileStream("C:\\PathToFile", System.IO.FileMode.OpenOrCreate))
using (var writer = new System.IO.BinaryWriter(file))
{
    // Write the raw bytes of the embedded resource out to disk.
    writer.Write(UpdateApplication.Data.DataValue, 0, UpdateApplication.Data.DataValue.Length);
}
(Here, UpdateApplication.Data denotes the binary resource.)
Our solution lacked compression, but I believe this is easily achieved with libraries such as SharpZipLib.
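If you would rather avoid an extra dependency, the framework's own System.IO.Compression can cover that step too; this is a sketch of unpacking a resource that was gzip-compressed before being added to the .resx (UpdateApplication.Data is the same resource as above):

using System.IO;
using System.IO.Compression;

class CompressedResource
{
    static void ExtractTo(string targetPath)
    {
        // Assumes the byte[] resource was gzip-compressed before being added to the .resx.
        using var input = new MemoryStream(UpdateApplication.Data.DataValue);
        using var gzip = new GZipStream(input, CompressionMode.Decompress);
        using var output = File.Create(targetPath);
        gzip.CopyTo(output);
    }
}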
I hope this solution is virus-scanner-safe, as this method creates a complete, valid executable file.

Find duplicate PDFs

I'm looking for a utility that will help me find duplicate PDFs. The problem: I have 1000s of PDF files. Some are duplicates. They are not easy to detect due to differing file names and small differences in file size. Is there a utility/algorithm/library that can help me find the duplicates or show me files that are very similar (or the degree of difference)?
Create an MD5 hash for each file and store it in a database. Identical files will then sort next to each other, or you can quickly search for a pre-existing key.
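A rough sketch of that approach in C#, grouping the files by hash in memory rather than in a real database (this catches exact byte-for-byte duplicates only, not re-saved copies):

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

class FindDuplicatePdfs
{
    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";
        var byHash = new Dictionary<string, List<string>>();

        using var md5 = MD5.Create();
        foreach (string path in Directory.EnumerateFiles(root, "*.pdf", SearchOption.AllDirectories))
        {
            using var stream = File.OpenRead(path);
            string hash = BitConverter.ToString(md5.ComputeHash(stream));

            if (!byHash.TryGetValue(hash, out var group))
                byHash[hash] = group = new List<string>();
            group.Add(path);
        }

        foreach (var group in byHash.Values)
            if (group.Count > 1) // any group with more than one path is a set of duplicates
                Console.WriteLine(string.Join(Environment.NewLine + "  ", group));
    }
}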
The problem is not completely solved by any single tool I know of. What I do is use fdupes http://premium.caribe.net/~adrian2/fdupes.html to find exact duplicates.
But most of all, I use a workflow which minimizes duplicates. Every document that enters my system gets indexed with this Perl script I wrote: http://seegras.discordia.ch/Programs/fileindex which puts the name and an md5 sum of each file into ~/.fileindex.md5. Now I can change the metadata of the local PDF files or whatever (and run fileindex again), and whenever I accidentally download the same file again, I will still have the md5 sum of the original file, and thus can detect whether it's a duplicate.
There's also exif-meta and exif-rename on http://seegras.discordia.ch/Programs/ which help with setting PDF metadata and with renaming PDF files according to metadata; if you're tagging all the files correctly, you will end up with duplicate filenames, indicating that they might be the same document in a different file.
If the files were created by different tools, they could look the same but compare as very different, because they are structured totally differently. I made some suggestions in a blog article at https://blog.idrsolutions.com/2010/09/comparing-2-pdf-files/
DiffPDF looks like something that might help you.
I remember that there is a UNIX utility called pdftotext (see the package poppler-utils). You can try to extract the text from the files and make a textual diff.
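A sketch of that idea from code, shelling out to pdftotext (which has to be installed and on the PATH) and hashing the extracted text instead of the raw bytes, so two files with identical text compare equal even when their bytes differ; small layout differences in the extracted text will still defeat it, so a real diff of the text is more forgiving:

using System;
using System.Diagnostics;
using System.IO;
using System.Security.Cryptography;

class PdfTextHash
{
    // Extract a PDF's text with poppler's pdftotext, then hash the text.
    static string HashOfText(string pdfPath)
    {
        string txtPath = Path.GetTempFileName();
        var psi = new ProcessStartInfo("pdftotext", "\"" + pdfPath + "\" \"" + txtPath + "\"")
        {
            UseShellExecute = false
        };
        using (var process = Process.Start(psi))
            process.WaitForExit();

        using var md5 = MD5.Create();
        using var stream = File.OpenRead(txtPath);
        return BitConverter.ToString(md5.ComputeHash(stream));
    }
}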