How to force HotSpot JVM to overwrite heap dump file?

I am dumping heap (OpenJDK 7) on OOM with
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/jvm.hprof
startup opts for my VM. I don't have enough space on the disk and can't afford to store multiple dumps (heap size is 6g). Is there a way to force the JVM to overwrite the dump file? Currently it complains about 'file exists' and leaves the existing dump intact. I have read Sun's docs, but there doesn't seem to be any option to force an overwrite.

No, there is no way to overwrite the file.
The relevant code is here: http://hg.openjdk.java.net/jdk/jdk/file/1ae823617395/src/hotspot/share/services/heapDumper.cpp#l465
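Purely as an illustration (this is not the HotSpot source; the real logic is in the heapDumper.cpp file linked above), here is a small C++ sketch of exclusive-create semantics, the kind of behaviour described in the question: open() with O_CREAT | O_EXCL fails with EEXIST instead of overwriting an existing file.
// Hypothetical illustration only; NOT the HotSpot heapDumper.cpp code.
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
int main() {
    const char* path = "/tmp/jvm.hprof";  // same path as -XX:HeapDumpPath above
    // Exclusive create: fails instead of truncating an existing file.
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0) {
        // With a leftover /tmp/jvm.hprof this prints "... File exists"
        // and the existing dump is left intact.
        std::fprintf(stderr, "Unable to create %s: %s\n", path, std::strerror(errno));
        return 1;
    }
    // ... a dump would be written here ...
    close(fd);
    return 0;
}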

Related

How to analyze crash dumps of EXE/DLL files which were protected by VMProtect

I am confused about how people analyze crash dump files generated by EXE/DLL files that were protected by VMProtect (3.0 or later). Even though I have the original EXE/DLL, the map file, and the PDB file, I cannot find the original call stack or the crash point in the C++ source code. Does anyone know how to analyze these dump files? I've got a huge number of dump files to handle...
You can use "MiniDump Fixer" to fix minidump files:
https://vmpsoft.com/20111114/minidump-fixer/

Binary open file in C++/CLI or C#

I have a problem opening a binary file in C++/CLI. How do I open the whole file and save its contents to an array^?
A typical method for reading the contents of a binary file is to:
1. Determine the length of the file.
2. Allocate dynamic memory for the file.
3. Block read the file, in binary mode, into memory.
Some operating systems may have a memory map capability. This allows a file to be treated as an array. The OS is in charge of reading the file into memory. It may read the entire file, or it may read pages as necessary (on demand).
See std::ifstream::read, std::ifstream::seekg and std::ifstream::tellg.
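As a rough sketch of steps 1-3 in plain ISO C++ (using std::vector<char> in place of a managed array^; "data.bin" is just a made-up file name), following the seekg/tellg/read idiom mentioned above:
#include <fstream>
#include <iostream>
#include <vector>
int main() {
    // Open in binary mode, positioned at the end so tellg() gives the size.
    std::ifstream file("data.bin", std::ios::binary | std::ios::ate);
    if (!file) {
        std::cerr << "cannot open file\n";
        return 1;
    }
    std::streamsize size = file.tellg();   // 1. determine the length
    file.seekg(0, std::ios::beg);          //    rewind to the start
    std::vector<char> buffer(static_cast<std::size_t>(size));  // 2. allocate
    if (!file.read(buffer.data(), size)) {                     // 3. block read
        std::cerr << "read failed\n";
        return 1;
    }
    std::cout << "read " << buffer.size() << " bytes\n";
    return 0;
}
In C#, System.IO.File.ReadAllBytes(path) collapses all three steps into a single call.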

How to get the file back after using "unlink" in R?

I accidentally deleted some of my useful files. The files were deleted and I could not find them in the Recycle Bin. I want to know how I can get them back.
I am using Windows 8.1. All the files in My Documents were deleted using unlink in R. I tried using R-delete to recover them, but it can only recover files deleted via the Recycle Bin, not files removed with unlink in R.
Thank you.
Though I am not an R expert, I assume that your files have been unlinked at the filesystem level, so you can't expect to find them in your operating system's recycle bin. If they are very important, the only real solution is:
stop doing anything with your computer immediately;
take the time to read up on and understand the recovery process from another computer;
try accessing your hard drive (or whatever it is) from another mounted filesystem/operating system (boot from a USB stick, for instance);
use an undelete tool suited to your filesystem.
You don't say which operating system and filesystem you're using; maybe there will be a tool usable from the mounted filesystem, which could make things easier; but in any case, don't use your computer too much before doing this...

Ensure a file is not changed while trying to remove it

In a POSIX environment, I want to remove a file from disk, but calculate its checksum before removing it, to make sure it was not changed. Is locking enough? Should I open it, unlink, calculate checksum, and then close it (so the OS can remove its inode)? Is there any way to ensure no other process has an open file descriptor on the file?
To give a bit of context, the code performs synchronization of files across hosts, and there's an opportunity for data loss if a remote host removes a file but the file is being changed locally.
Your proposal of open,unlink,checksum,close won't work as is, because you'll be stuck if the checksum doesn't match (there is no POSIX-portable way of creating a link to a file given by a file descriptor). A better variant is rename,checksum,unlink,close, which lets you undo the rename or redo the copy if the checksum doesn't match. You'll still need to think of what you want to do if a third program has recreated the file in the meantime.
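A rough sketch of that rename, checksum, unlink, close sequence using POSIX calls from C++ (the function name, the temporary path, and the expected checksum are placeholders, and the checksum routine here is just a trivial byte sum; a real implementation would use a proper hash such as SHA-256):
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
// Placeholder checksum: a plain byte sum over the open descriptor.
static uint64_t checksum_fd(int fd) {
    uint64_t sum = 0;
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; ++i)
            sum += static_cast<unsigned char>(buf[i]);
    return sum;
}
// Returns 0 if the file matched and was removed, -1 otherwise.
int remove_if_unchanged(const char* path, const char* tmp_path, uint64_t expected) {
    if (rename(path, tmp_path) != 0)        // 1. move the file aside
        return -1;
    int fd = open(tmp_path, O_RDONLY);
    if (fd < 0)
        return -1;
    if (checksum_fd(fd) == expected) {      // 2. checksum the renamed file
        unlink(tmp_path);                   // 3. matched: really remove it
        close(fd);                          // 4. close (inode freed now)
        return 0;
    }
    close(fd);
    rename(tmp_path, path);                 // mismatch: undo the rename
    return -1;
}
Note that the undo branch is exactly where the caveat above bites: if a third program has recreated the original path in the meantime, renaming the old file back would clobber the new one.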
POSIX offers only cooperative locks. If you have control over the programs that may modify the file, make sure they use locks; if that's not an option, you're stuck without locks.
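If you do control the writers, a minimal sketch of a POSIX advisory lock via fcntl might look like this (the file path is a placeholder; the lock only means something if every cooperating process takes the same lock):
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
int main() {
    int fd = open("/tmp/example.dat", O_RDWR);  // placeholder path
    if (fd < 0) {
        std::perror("open");
        return 1;
    }
    struct flock lk = {};
    lk.l_type = F_WRLCK;                 // exclusive (write) lock
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;
    lk.l_len = 0;                        // 0 means lock the whole file
    if (fcntl(fd, F_SETLKW, &lk) != 0) { // block until the lock is granted
        std::perror("fcntl");
        close(fd);
        return 1;
    }
    // ... checksum and unlink while holding the lock ...
    lk.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &lk);             // release; also dropped on close/exit
    close(fd);
    return 0;
}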
There is no portable way to see what (or even whether) processes have opened a file. On most Unix systems, lsof will show you, but this is not universal, not robust (a program could open the files just after lsof has finished looking), and incomplete (if the files are exported over NFS, there may be no way to know about active clients).
You may benefit from looking at what other synchronization programs are doing, such as rsync and unison.

How do you reduce the size of a folder's index file in NTFS?

I have a folder in NTFS that contains tens of thousands of files. I've deleted all files in that folder, save one. I ran contig.exe to defragment the folder, so now it's in one fragment only. However, the size of that folder is still 8MB, which implies that there's a lot of gap in the index. Why is that? If I delete that one file, the size of the index automatically goes to zero; my guess is that it gets collapsed into the MFT. Is there any way to get NTFS to truly defragment the index file by compacting it based on its contents? Any API that you're aware of? Contig.exe only defragments the physical file.
I guess this is one way in which NTFS is just like almost every other FS - none of them seem to like shrinking directories.
So you should apply a high-tech method that involves using that advanced language, "BAT" :)
collapse.bat
REM Invoke as "collapse dirname"
ren %1 %1.old
mkdir %1
cd %1.old
move * ..\%1\
cd ..
rmdir %1.old
There is slack in the index, but not a gap. I make the distinction to imply that there is technically wasted space, but it's not like NTFS has to parse the 8MB in order to enumerate/query/whatever the index. It knows where the root of its tree is, and it just happens to have a lot of extra allocation leftover. Probably too detailed a response, given how unhelpful it is.
Fragmentation is likely a separate issue altogether.
Take a look at the accepted answer to this question: NTFS performance and large volumes of files and directories
The author provided some otherwise undocumented information about file index fragmentation, which he received from Microsoft Tech Support during an incident. The short version is that DEFRAG does not defragment the folder index, only the files in that folder. If you want to defragment the folder index, you have to use Sysinternals' CONTIG tool, which is now owned and distributed (for free) by Microsoft. The answer gives a link to CONTIG.