I have some code which is parsing the $MFT on an NTFS disk.
All works perfectly, except that a handful of records (roughly 10 out of 60000) return incorrect characters in the file name. See the screenshot below:
Note the Unicode character defined by byte '0E'. In all other applications, this is an underscore character. See below:
Even in the $INDEX_ROOT attribute of the containing directory, it has the correct name:
Am I reading the $FILE_NAME attribute wrong? Or should I ignore what's there and always use the name from the $INDEX_ROOT attribute of the directory instead? That seems a bit backwards, though.
Note: it isn't always '0E', and isn't always this file name, but seems to always be only one character which is wrong in each 'bad' record.
For anyone in the future, I stumbled across the answer while reading this link:
The fixup array starts at offset 0x30. The first two bytes (0x8C 0x06) are the update sequence number, which appears as the last two bytes of every sector of the record. The real final two bytes of each sector are stored in the fixup array that follows; in this case, all zeroes.
Your values will differ, but you'll notice that the 'bad' file names occur whenever the $FILE_NAME attribute spans a sector boundary (as in the WinHex screenshots above). Once the end-of-sector bytes are replaced with the corresponding fixup bytes, the file names come out correct.
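To make that concrete, here is a minimal sketch (in Python, with illustrative names) of applying the fixup array to a raw MFT record before parsing its attributes; the header offsets follow the standard FILE record layout:
<pre>
import struct

SECTOR_SIZE = 512

def apply_fixups(record: bytearray) -> bytearray:
    """Replace the last two bytes of each sector with their stored originals."""
    # Offset and word count of the update sequence array, from the FILE header.
    usa_offset, usa_count = struct.unpack_from("<HH", record, 0x04)
    usn = record[usa_offset:usa_offset + 2]      # update sequence number
    for i in range(1, usa_count):                # entries 1..n-1: saved bytes
        saved = record[usa_offset + 2 * i:usa_offset + 2 * i + 2]
        end = i * SECTOR_SIZE
        # Each sector must currently end with the USN, else the record is torn.
        assert record[end - 2:end] == usn, "update sequence mismatch"
        record[end - 2:end] = saved
    return record
</pre>
After this pass, the $FILE_NAME bytes read back correctly even when the attribute straddles a sector boundary.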
Hello there Stack Overflow, I've been tasked with making a flat file schema as well as a map. However, our specification says that there are 3 fields:
Name        Length
------------------
TIdentity    2
OIdentity   17
Result       2
However, the file that we receive is 500(ish) characters long. Is there a way to make it ignore the remaining empty characters?
Thanks for any help you guys might be able to supply
You should definitely ensure the spec and sample files are correct (particularly that the spec contains any whitespace requirements/options), but assuming they are and you're just supposed to ignore the whitespace, you can create a node to stuff the whitespace into and just ignore it.
Without knowing a bit more about your requirements, it's hard to say exactly how this should work. If the whitespace is always a fixed length, make a node that expects that many characters. If it's not always a fixed length, you may have to make a repeating node that's one character long but not the record terminator (presumably CR/LF or something of the like). If the whitespace itself is the delimiter, you might be able to do something with the ignore_trailing_delimiter on the record.
Worst case scenario (whitespace is variable, you can't control the partner who sends it to you, and you can't get the FFDASM to sensibly deal with it), write a custom Decode component to preprocess the file and remove the extraneous whitespace.
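Purely as an illustration of the layout above (in Python, not BizTalk; the function and slice positions are hypothetical), here is what "read the three fields and never look at the padding" amounts to:
<pre>
def parse_record(line: str) -> dict:
    fields = {
        "TIdentity": line[0:2],
        "OIdentity": line[2:19],   # 17 characters
        "Result":    line[19:21],
    }
    # Everything past position 21 is padding; simply never read it.
    return {name: value.strip() for name, value in fields.items()}
</pre>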
I'm reading PDF specs and I have a few questions about the structure it has.
First of all, the file signature is %PDF-n.m (8 bytes).
After that, the docs say there might be at least 4 bytes of binary data (but there also might not be any). The docs don't say how many binary bytes there could be, so that is my first question. If I was trying to parse a PDF file, how should I parse that part? How would I know how many binary bytes (if any) were placed in there? Where should I stop parsing?
After that, there should be a body, a xref table and a trailer and an %%EOF.
What could be the minimal file size of a PDF, assuming there isn't anything at all (no objects, whatsoever) in the PDF file and assuming the file doesn't contain the optional binary bytes section at the beginning?
Third and last question: if there were more than one body+xref+trailer section, where would the offset just before the %%EOF be pointing to? The first or the last xref table?
First of all, the file signature is %PDF-n.m (8 bytes). After that, the docs say there might be at least 4 bytes of binary data (but there also might not be any). The docs don't say how many binary bytes there could be, so that is my first question. If I was trying to parse a PDF file, how should I parse that part? How would I know how many binary bytes (if any) were placed in there? Where should I stop parsing?
Which docs do you have? The PDF specification ISO 32000-1 says:
If a PDF file contains binary data, as most do (see 7.2, "Lexical Conventions"), the header line shall be immediately followed by a comment line containing at least four binary characters—that is, characters whose codes are 128 or greater.
Thus, those at least 4 bytes of binary data do not immediately follow the file signature without any structure; they are on a comment line! This implies that they are
preceded by a % (which starts a comment, i.e. data you have to ignore while parsing anyway) and
followed by an end-of-line, i.e. CR, LF, or CR LF.
So it is easy to recognize while parsing. In particular, it is merely a special case of a comment line and needs no special treatment.
(sigh, I just saw you and @Jongware cleared that up in comments while I wrote this...)
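A minimal sketch of that parsing step (Python; it assumes LF or CR-LF line endings, since readline splits on LF, so a CR-only file would need more careful end-of-line handling):
<pre>
# Read the %PDF-n.m signature, then treat any binary bytes that follow as
# what the spec says they are: a comment line to skip.
def read_header(f):                        # f opened in binary mode
    signature = f.readline()               # e.g. b"%PDF-1.7\n"
    assert signature.startswith(b"%PDF-"), "not a PDF file"
    pos = f.tell()
    line = f.readline()                    # possibly b"%\xe2\xe3\xcf\xd3\n"
    if not line.startswith(b"%"):          # no comment line: rewind
        f.seek(pos)
    return signature.rstrip()
</pre>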
What could be the minimal file size of a PDF, assuming there isn't anything at all (no objects, whatsoever) in the PDF file and assuming the file doesn't contain the optional binary bytes section at the beginning?
If there are no objects, you don't have a PDF file, as certain objects are required in a PDF file, in particular the catalog. So do you mean a minimal valid PDF file?
As you commented you indeed mean a minimal valid PDF.
Please have a look at the question What is the smallest possible valid PDF? on Stack Overflow; there are some attempts there to create minimal PDFs adhering more or less strictly to the specification. Reading e.g. @plinth's answer you will see stuff that is not PDF anymore but is still accepted by Adobe Reader.
Third and last question: if there were more than one body+xref+trailer section, where would the offset just before the %%EOF be pointing to?
Normally it would be the last cross reference table/stream, as the usual use case is:
you start with a PDF which has but one cross reference section;
you append an incremental update with a cross reference section pointing to the original as previous, and the new offset before %%EOF points to that new cross reference;
you append yet another incremental update with a cross reference section pointing to the cross references from the first update as previous, and the new offset before %%EOF points to that newest cross reference;
etc...
The exception is the case of linearized documents in which the offset before the %%EOF points to the initial cross references which in turn point to the section at the end of the file as previous. For details cf. Annex F of ISO 32000-1.
And as you can of course apply incremental updates to a linearized document, you can have mixed forms.
In general it is best for a parser to be able to handle any order of partial cross references. And don't forget: there are not only cross reference sections but, alternatively, also cross reference streams.
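For a rough idea of what following that chain looks like, here is a sketch (Python; it assumes classic plain-text xref tables and trailers, so documents with cross reference streams or unusual whitespace would need a proper tokenizer, and the regexes are illustrative, not robust):
<pre>
import re

def xref_offsets(data):
    """Yield each cross reference offset, newest first, via /Prev links."""
    # startxref near the end of the file: the offset just before %%EOF.
    m = re.search(rb"startxref\s+(\d+)\s+%%EOF\s*$", data)
    offset = int(m.group(1))
    while offset is not None:
        yield offset
        # The trailer for this section lies between the xref and its %%EOF.
        trailer = data[offset:data.index(b"%%EOF", offset)]
        prev = re.search(rb"/Prev\s+(\d+)", trailer)
        offset = int(prev.group(1)) if prev else None
</pre>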
I am trying to filter files using the FILE_MASK parameter of EPS2_GET_DIRECTORY_LISTING, to reduce the time spent searching all files in the folder (it has thousands of files).
The file mask I tried:
TK5_*20150811*
The file name in the folder is:
TK5_Invoic_828243P_20150811111946364.xml.asc
But it exports all files to the DIR_LIST table, so nothing is filtered.
But when I try with:
TK5_Invoic*20150811*
It works!
What I think is that it works if I give the first 10 characters verbatim. But I do not always have the first 10 characters.
Can you give me any advice on using FILE_MASK?
Haven’t tried, but this sounds plausible:
https://archive.sap.com/discussions/thread/3470593
The * wildcard may only be used at the end of the search string.
It is not specified what a '*' matches when it is not the last non-space character in the FILE parameter value.
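If that restriction holds, one workaround is to ask for the broadest supported prefix mask (e.g. TK5_*) and apply the full pattern to the returned names yourself. A sketch of that post-filtering step (in Python rather than ABAP, purely to illustrate the idea):
<pre>
import fnmatch

def filter_listing(dir_list, pattern="TK5_*20150811*"):
    # dir_list: names returned by EPS2_GET_DIRECTORY_LISTING called with
    # FILE_MASK = 'TK5_*' (wildcard at the end only, which is supported).
    return [name for name in dir_list if fnmatch.fnmatch(name, pattern)]
</pre>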
I have a formatted data file which is typically billions of lines long, with several lines of headers of variable length. The data file takes the form:
# header 1
# header 2
# headers are of variable length.
# data begins from next line.
1.23 4.56 7.89 0.12
2.34 5.67 8.90 1.23
:
:
# billions of lines of data, each row the same length, same format.
-- end of file --
I would like to extract a portion of data from this file, and my current code looks like:
<pre>
do j = 1, jmax  ! Suppose I want to extract jmax lines of data from the file.
   [algorithm to determine number of lines to skip, "N(j)"]
   ! This determines the number of lines to skip from the previous file
   ! position, where the data was read on the (j-1)th iteration.

   ! Skip N-1 lines to get to the next data line to read off:
   do i = 1, N-1
      read(unit=unit, fmt='(A)')
   end do

   ! Now read off the line of data I want:
   read(unit=unit, fmt='(data_format)') data1, data2, etc.
   ! Data is stored in some arrays.
end do
</pre>
The problem is, N(j) can be anywhere between 1 and several billion, so it takes some time to run the code.
My question is, is there a more efficient way of skipping millions of lines of data? The only way I can think of, while sticking to Fortran, is to open the file with direct access and jump to the desired line upon opening the file.
As you suggest, direct access seems like the best option. But that requires the records to all have the same length, which your headers violate. Also, why use formatted output? With a file of this length, it's hard to imagine a person reading it. If you use unformatted IO, the file will be both smaller and faster to read and write.

Perhaps create two files: one with the headers (metadata) in human-readable form, and the other with the data in native form. Native/binary representation means a loss of portability, which is something to consider if you want to move the files to different computer architectures or have them be usable for decades; otherwise it's probably worth the convenience. Another option would be a more sophisticated file format that combines metadata and data, such as HDF5 or FITS, but for communication between two programs of one person, that's probably excessive.
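The payoff of fixed-length records is that reading record n becomes a single seek instead of n reads. Here is a sketch of the offset arithmetic (shown in Python for brevity; Fortran direct access does the same thing internally), assuming the data were rewritten as a binary file of four float64 values per record after a header of known byte length:
<pre>
import numpy as np

VALUES_PER_RECORD = 4                     # four values per line of data
RECORD_BYTES = VALUES_PER_RECORD * 8      # float64

def read_record(f, header_bytes, n):
    """Read record n (0-based) without scanning the records before it."""
    f.seek(header_bytes + n * RECORD_BYTES)
    return np.fromfile(f, dtype=np.float64, count=VALUES_PER_RECORD)
</pre>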
I am trying to read a binary file with Python. This is the code I use:
fb = open(Bin_File, "r")
a = numpy.fromfile(fb, dtype=numpy.float32)
However, I get zero values at the end of the array. For example, for a case where nrows=296 and ncols=439 (and as a result len(a)=296*439), I get zero values for a[-922:]. I know these values should be noData (-9999 in this example) from a trusted piece of code in R. Does anybody know why I am getting these nonsense zeros?
P.S.: I am not sure whether it is related or not, but len(a) is nrows*ncols+2! I have to get rid of these two using a = a[0:-2] so that when I reshape into rows and columns using a_reshape = a.reshape(nrows, ncols) I don't get an error.
When opening a file for reading as binary you should use the mode "rb" instead of "r".
Here is some background from the docs. On Linux machines you don't need the "b", but it won't hurt. On Windows machines you must use "rb" for binary files.
Also note that the two extra entries you're getting are a common bug/feature when using the "unformatted" binary output format of Fortran. Each write statement given in this mode will produce a record that is surrounded by two 4-byte blocks.
These blocks hold integers giving the number of bytes in the block of unformatted data. For example: [223] [223 bytes of data] [223].
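So rather than just slicing off the last two entries, you can strip the markers explicitly. A sketch of reading one such record (assuming 4-byte little-endian markers, which is compiler-dependent):
<pre>
import struct
import numpy as np

def read_fortran_record(f, dtype=np.float32):
    head, = struct.unpack("<i", f.read(4))   # leading byte count
    data = np.frombuffer(f.read(head), dtype=dtype)
    tail, = struct.unpack("<i", f.read(4))   # trailing byte count
    assert head == tail, "mismatched record markers"
    return data

with open(Bin_File, "rb") as fb:             # note "rb", not "r"
    a = read_fortran_record(fb)
</pre>
If SciPy is available, scipy.io.FortranFile does this bookkeeping for you.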