How to set FAT32 short names in PowerShell or Windows Command Shell

I need to write a script that sets the short name of a file on a FAT32 file system. On NTFS I can use the FSUTIL utility under Windows, but I cannot seem to fathom out how to do this for a FAT32 drive.
Bonus kudos for a Windows command or PowerShell script.

They made this super hard to get at, but here is one solution:
Converting the FileSystemObject's ShortName Property

Unfortunately it's not possible, because of a limitation of the SetFileShortName() Win32 API:
Sets the short name for the specified file. The file must be on an NTFS file system volume.
You can manually hex-edit the FAT32 partition to set the short names and update the checksums, but it'll be quite fragile without support from the file system driver.
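For reference, the checksum in question: each VFAT long-file-name entry stores a one-byte checksum of the 8.3 short name it belongs to, so changing a short name by hand orphans its long name unless you recompute and rewrite that byte in every LFN entry. A sketch of the checksum algorithm from the FAT specification (Python used here just for illustration):

def lfn_checksum(short_name):
    # short_name is the raw 11-byte 8.3 field from the directory entry:
    # 8 name bytes plus 3 extension bytes, space-padded, no dot.
    assert len(short_name) == 11
    s = 0
    for b in short_name:
        # Rotate the running sum right one bit, then add the next byte.
        s = (((s & 1) << 7) + (s >> 1) + b) & 0xFF
    return s

print(lfn_checksum(b"FOO     TXT"))  # checksum for FOO.TXT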

Related

What is the expected behavior for a self-deleting executable in OS X, Linux, and Windows?

What is supposed to happen to an executable that deletes itself as part of its execution? Are the rules different for different OSes? Does it depend on the executable format (e.g. PE, Mach-O, etc.) or on something else?
Specifically, I want to know about the expected behavior for a self-deleting executable in OS X, Linux, and Windows. If they are different, I want to know why.
Motivation:
I work on a project that has a "nuclear" build clean up command:
jlpm clean:slate
The above command completely cleans up and uninstalls everything related to the project, including the jlpm executable itself. On OS X/Linux the clean:slate command works fine, but I've been told it doesn't work on Windows. I'm curious as to why, and how I should go about fixing it.
Are the rules different for different OSes?
Yes.
Does it depend on the executable format (e.g. PE, Mach-O, etc.)?
No, executable format is irrelevant.
Traditional UNIX implementations keep a reference count on the file's inode. When a regular file is on disk and no program has it open, it has a reference count of 1 (assuming there are no hard links to it). The 1 comes from the directory entry in which the file appears.
If you then rm the file, the inode reference count drops to 0, which signals to the OS that it is no longer needed, and all data associated with it can be discarded.
When some program opens the file (or the file is executing), the inode reference count is incremented (now 2). If you now remove the file from the directory, the inode reference count drops to 1, but the file is still there, so there is no problem.
(This is how you could hog disk space on a machine in a way that is "invisible" to the system administrator.)
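You can watch this happen from any scripting language; a minimal Python sketch on a POSIX system:

import os

fd = os.open("demo.txt", os.O_CREAT | os.O_RDWR, 0o644)
os.write(fd, b"still here")

os.unlink("demo.txt")        # directory entry gone, refcount drops to 1
os.lseek(fd, 0, os.SEEK_SET)
print(os.read(fd, 100))      # b'still here' - the open fd keeps the inode alive

os.close(fd)                 # refcount hits 0, data is discarded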
Windows does not have such reference counting, and attempts to remove an open file fail. This causes no end of problems for UNIX programmers.
how I should go about fixing it
See answers to this question.
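For the Windows side specifically, one common workaround (sketched below in Python with made-up details; not how jlpm itself does it) is to hand the deletion off to a short-lived child process that outlives the parent:

import os
import subprocess

def delete_after_exit(path):
    # Spawn a detached cmd.exe that waits a moment for this process to
    # exit, then deletes `path`. The ping is a crude delay; real code
    # should wait on the parent PID instead of sleeping a fixed time.
    subprocess.Popen(
        ["cmd", "/c", 'ping -n 3 127.0.0.1 > nul & del /f /q "%s"' % path],
        creationflags=subprocess.DETACHED_PROCESS
        | subprocess.CREATE_NEW_PROCESS_GROUP,
    )

if os.name == "nt":
    delete_after_exit(os.path.abspath(__file__))  # hypothetical usage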

How to remove the .efs file extension from 1000s of recovered files in one folder

I recently recovered a 1.5TB external HDD that crashed. The program I used to recover the files was Active Undelete Enterprise; it's excellent. When the files were successfully recovered, they were all saved with a .efs extension, so files looked like mydocument.docx.efs. At first I thought they were encrypted and needed to be decrypted; I spent 10 minutes on it and realized I just needed to remove the .efs from the end of the filename, and then mydocument.docx works perfectly. The problem is that I now have over 55,000 files within hundreds of folders where I need to simply remove the .efs from each filename. Does anyone know how to do this?
From a command prompt window, navigate to the top level directory where these files reside.
Type the command
DIR /S /B > filelist.txt
This command will give you a bare-format file listing of the current directory plus all nested subdirectories, without any extraneous information. The list will be contained in the text file named "filelist.txt" (or whatever else you choose to call it). I would then open this text file in a text editor and convert every line of text from, for example,
C:\Users\dlucas\.gimp-2.8\mathmap\file1.png.efs
to
rename c:\Users\dlucas\.gimp-2.8\mathmap\file1.png.efs file1.png
to give a simple example of a file that I just found on my system using this method.
You will need to use a text editor with columnar editing capability, since you have to modify so many files. Old programmers' editors such as CodeWright made this really simple, while modern editors such as Eclipse or Notepad++ make this a little more difficult and may require a columnar editing plugin, depending on the version. You basically have to make a columnar copy of all of the text in the file, and then paste the copy off to the far right - far enough that a second column of filenames and paths won't overwrite any of the existing file names and paths. You can then use the columnar editing features to select and delete the path names in the second column, since the rename command requires that the second argument be simply the base filename and extension without the path information. Finally, use the columnar editing features to prepend every line with "RENAME ". If you attempt to do this without columnar editing features, you will find it slow going!
An alternate way to do this is to use a command formed from a "regular expression" to create the rename command. If you are not familiar with "regular expressions", ask a programmer friend, as this is not an easy topic to learn from scratch. If you are familiar with regular expressions, this is probably the simplest way to perform this task. I haven't used them in many years and no longer recall the exact syntax to use, or I would tell you myself.
Regardless of what kind of editor you use, the goal is to turn this ASCII file list of paths and filenames into a batch file (simply rename filelist.txt to filelist.bat when you are finished editing). You can then run the batch file by typing filelist.bat at a command prompt.
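Alternatively, if you have a scripting language handy, you can skip the editor work entirely. A sketch in Python (the root path is a placeholder; assumes the name without .efs doesn't already exist):

import os

root = r"C:\recovered"   # placeholder: top-level directory of the recovered files

for dirpath, _dirnames, filenames in os.walk(root):
    for name in filenames:
        if name.lower().endswith(".efs"):
            src = os.path.join(dirpath, name)
            dst = os.path.join(dirpath, name[:-4])   # strip the ".efs"
            if not os.path.exists(dst):              # don't clobber anything
                os.rename(src, dst)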
I have just run into this same problem myself, using the same really wonderful tool that you used. I am writing this while waiting for the undelete program to finish. That it restores files with this extra extension seems very counterintuitive, so I will look for an option to make it not do this when it finishes. If I find one, I will post a new answer here that is more specific to this tool. Otherwise, I am going to have to rename all kazillion files just as you had to.
You experienced this problem because the disk that you recovered your files to "does not support encryption", according to the Active@ UNDELETE documentation. The documentation offers no further explanation of what kinds of disks support encryption, etc.
They offer a Decrypt command that restores the files' proper names as a post-processing step. Unfortunately, this requires that you "include" each and every file to be decrypted, with no support for wildcards or for walking subdirectories, so that is a non-starter in my opinion, given that both of us have hundreds of thousands of files to be renamed.
I did find that by selecting a normal fixed (non-removable) hard drive as the destination of the recovery effort, the resulting files do not end up "encrypted" (i.e., they are recovered with the proper file name and extension). I originally chose a large USB-based flash drive, and the files were stored in their "encrypted" state (not really encrypted, but potentially so, hence the .efs extension). Of course, this meant that I had to run the command all over again after switching to a regular hard drive (it takes about 16 hours to recover 80 GB worth of files due to the presence of many sector CRC errors).

Looking for fast "Find in Files" program

I currently have a directory with 98,000 individual archive transaction files. I need to search those files for user input strings and have the option to open the files as it finds them or at the end of the search. I'm using Notepad++ currently and, while functional, it's quite slow. I thought about writing my own, but I am only familiar with .NET and I'm a beginner. Also, I'm not sure how efficient that would be compared to NP++.
This tool would be used again and again so the dev time would definitely be worth it if it came to that. Is there some other tool out there that's already developed that would accomplish this?
Agent Ransack
I've been using it for years.
I recommend using AstroGrep, a grep utility for Windows. You can open files as it finds them, and it shows you the line where the match was found, without having to open the file.
Assuming the archive transaction files are plain text, you can download Cygwin, which is an environment providing UNIX tools for Windows.
Once that's done, you can open a new Cygwin Bash Shell, then do cd 'c:\foo' to get into the directory with your files, then do grep -F -r "my string" * to find your text. (The -F means it searches for that literal string as opposed to a regular expression, and -r means recursive.)
Possibly overkill, but you could index the folder using Lucene and keep the index up to date as transaction files are added; searches will then take a trivial amount of time, and you can get the file, line, and word number of each match for a given search string.
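And if you do decide to roll your own, the core loop is small; a minimal unindexed Python sketch (the root path and search string are placeholders):

import os

def find_in_files(root, needle):
    # Yield (path, line number, line) for every line containing needle.
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if needle in line:
                            yield path, lineno, line.rstrip()
            except OSError:
                pass  # unreadable file; skip it

for path, lineno, line in find_in_files(r"C:\archive", "my string"):
    print("%s:%d: %s" % (path, lineno, line))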

files opened by a process on VMS

I have a DCL script on VMS which calls a perl script. Is there a VMS/DCL command I can use that will tell me every file handle opened by the perl script?
Set default to the disk the app runs from (or you might have to try each disk in succession if it's a really large or distributed app). Then the command is
show device/files/nosystem
If you're on a more recent version of VMS and the lists are too long, you can pipe it with a search by doing this:
pipe show device/files/nosystem | search sys$input (name of perl script)
You need to find the documentation for undocumented VMS features :-)
Seriously I think that set watch might do what you want. If you issue
$ set watch file/class=(all,nodump)
$ perl yourperlscript.pl
You will get loads of output that will hopefully include what you want. I haven't done it for years; you can probably tweak the options to fine-tune it. See
http://www.parsec.com/openvms/undocumented.php?page=13
Jason, I need more clarification for a). Are you saying that you want to run your perl script in a batch file and have the batch file monitor the files being accessed by the perl script? Or something else?
Hmm, not sure about that. Maybe add a linux tag to your post so that some Linux people can see this and chime in. I'm not sure why your Perl program wouldn't know what files it opened. It's your program; wouldn't it access the files you told it to access? Or if you're computing the filenames somehow (which I've done in COBOL, but still knowing at least which directory to find them in and what naming scheme they use), you'd still have clues like what I mention. Also, since it's your program, and if you're computing the filenames, couldn't you also make your Perl program output its own little report of what the files were? Like, just after it computes the filename, have it copy the name string to a separate report file.

Ensure a file is not changed while trying to remove it

In a POSIX environment, I want to remove a file from disk, but calculate its checksum before removing it, to make sure it was not changed. Is locking enough? Should I open it, unlink, calculate checksum, and then close it (so the OS can remove its inode)? Is there any way to ensure no other process has an open file descriptor on the file?
To give a bit of context, the code performs synchronization of files across hosts, and there's an opportunity for data loss if a remote host removes a file but the file is being changed locally.
Your proposal of open, unlink, checksum, close won't work as is, because you'll be stuck if the checksum doesn't match (there is no POSIX-portable way of creating a link to a file given by a file descriptor). A better variant is rename, checksum, unlink, close, which lets you undo the rename or redo the copy if the checksum doesn't match. You'll still need to think about what you want to do if a third program has recreated the file in the meantime.
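A sketch of that rename, checksum, unlink, close sequence in Python (sha256 stands in for whatever checksum the synchronizer actually uses; the scratch suffix is made up):

import hashlib
import os

def remove_if_unchanged(path, expected_sha256):
    # Move the file aside first so no one can modify it under its old name.
    tmp = path + ".removing"          # made-up scratch name, same directory
    os.rename(path, tmp)              # atomic within one POSIX filesystem

    h = hashlib.sha256()
    with open(tmp, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)

    if h.hexdigest() == expected_sha256:
        os.unlink(tmp)
        return True

    # Checksum mismatch: undo the rename. Beware: if a third program has
    # recreated `path` in the meantime, this silently replaces it.
    os.rename(tmp, path)
    return False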
POSIX offers only cooperative locks. If you have control over the programs that may modify the file, make sure they use locks; if that's not an option, you're stuck without locks.
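If all the programs involved do cooperate, the lock itself is a one-liner; a sketch using Python's fcntl.lockf (a POSIX advisory lock; the filename is a placeholder):

import fcntl

with open("shared.dat", "rb") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)   # blocks until cooperating writers release
    try:
        data = f.read()             # safe from cooperating writers while held
    finally:
        fcntl.lockf(f, fcntl.LOCK_UN)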
There is no portable way to see what (or even whether) processes have opened a file. On most Unix systems, lsof will show you, but this is not universal, not robust (a program could open the files just after lsof has finished looking), and incomplete (if the files are exported over NFS, there may be no way to know about active clients).
You may benefit from looking at what other synchronization programs are doing, such as rsync and unison.