PDF 1.3 text conversion with escape sequences and character codes

I have a PDF 1.3 from which I want to extract the text.
But in the content stream there are two different kinds of text:
some plain text and some character-coded text with escape sequences.
Here is an example:
/TextClip BMC
BT
/T1_2 1 Tf
0 Tc 0 Tw 7 Tr 16.2626 0 0 16.2626 37.2512 581.738 Tm
(Test Test)Tj
ET
EMC
q
/GS0 gs
67.6799985 0 0 -13.4399997 37.439994 594.2399583 cm
/Im47 Do
Q
Q
Q
q
37.499 569.52 179.713 8.34 re
W n
q
/GS0 gs
180.959996 0 0 -9.5999998 36.959999 578.3999755 cm
/Im48 Do
Q
Q
q
37.499 569.52 179.713 8.34 re
W n
q
/TextClip BMC
BT
0 Tc 0 Tw 7 Tr 9.899 0 0 9.899 37.2512 569.7178 Tm
[(\000E\000V\000d\000e\000\003\000E\000V\000d\000e)]TJ
ET
EMC
In this example the text "Test Test" appears twice: once as plain text and once with the escape sequence \000E\000V\000d\000e\000\003\000E\000V\000d\000e.
I only knew that if an escape sequence is followed by three digits, it is an octal character code. But in my example there are sometimes four and sometimes three characters after the backslash.
The fourth character after the escape sequence is offset by 15 from the correct ASCII code (\000E is the character "T"). But what is the correct conversion?
The block \000\003 should be a space. What is the conversion trick there?
Regards

The encoding of the string arguments of text-showing instructions like TJ and Tj depends on the PDF font in question, cf. the specification:
A string operand of a text-showing operator shall be interpreted as a sequence of character codes identifying the glyphs to be painted.
With a simple font, each byte of the string shall be treated as a separate character code. The character code shall then be looked up in the font’s encoding to select the glyph, as described in 9.6.6, "Character Encoding".
With a composite font (PDF 1.2), multiple-byte codes may be used to select glyphs. In this instance, one or more consecutive bytes of the string shall be treated as a single character code. The code lengths and the mappings from codes to glyphs are defined in a data structure called a CMap, described in 9.7, "Composite Fonts".
(section 9.4.3 - Text-Showing Operators - in ISO 32000-1)
The font used for the first text showing operation
(Test Test)Tj
is probably a simple font with an ASCII-like encoding, likely WinAnsiEncoding. The font itself is selected two lines above in
/T1_2 1 Tf
so you only have to look up the font resource T1_2 in the associated resources (the resources of the page if you are showing us an excerpt of a page content stream) to verify.
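For example, you can dump the fonts of the first page's resources and their encodings with a library. A minimal sketch, assuming Apache PDFBox 2.x (the library is an assumption, any PDF library with font access will do) and a placeholder file name example.pdf:

import java.io.File;
import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDResources;
import org.apache.pdfbox.pdmodel.font.PDFont;
import org.apache.pdfbox.pdmodel.font.PDSimpleFont;
import org.apache.pdfbox.pdmodel.font.encoding.Encoding;

public class ListPageFonts {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = PDDocument.load(new File("example.pdf"))) {
            PDResources resources = doc.getPage(0).getResources();
            for (COSName name : resources.getFontNames()) {
                PDFont font = resources.getFont(name);
                System.out.print(name.getName() + ": " + font.getClass().getSimpleName()
                        + ", base font " + font.getName());
                if (font instanceof PDSimpleFont) {
                    // simple fonts expose their single-byte encoding directly
                    Encoding enc = ((PDSimpleFont) font).getEncoding();
                    System.out.print(", encoding "
                            + (enc == null ? "built-in" : enc.getClass().getSimpleName()));
                }
                System.out.println();
            }
        }
    }
}

For the content stream above you would expect T1_2 to be reported as a simple font (e.g. with WinAnsiEncoding) and the other font as a Type 0 font.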
The font used in the second text showing operation
[(\000E\000V\000d\000e\000\003\000E\000V\000d\000e)]TJ
appears to be a composite font with a double-byte encoding, probably Identity-H, and the underlying font program appears to have the glyph codes most often found in TrueType fonts. You should look for a ToUnicode mapping in that PDF font for easy decoding.
The instruction in which this font is selected is not among the instructions you posted; it must be somewhere further up. The selection has been saved as part of the graphics state (by one of the earlier q instructions) and restored again (by a Q instruction between the two text-showing instructions you shared).
if an escape sequence is followed by three digits, it is an octal character code. But in my example there are sometimes four and sometimes three characters after the backslash.
No, in your example the escape sequences always have three octal digits. The character thereafter is a separate byte, i.e. you have the bytes '\000', 'E', '\000', 'V', '\000', 'd', '\000', 'e', '\000', '\003', '\000', 'E', '\000', 'V', '\000', 'd', '\000', and 'e'.
As mentioned above, this looks like a double-byte encoding with in particular the mappings
\000E -> 'T'
\000V -> 'e'
\000d -> 's'
\000e -> 't'
\000\003 -> ' ' (space)
This appears to be a glyph encoding often found in TrueType fonts which for Latin letters merely means a constant offset to their Unicode codes.
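To illustrate the mechanics, here is a minimal sketch that resolves the octal escapes of the literal string into bytes, groups the bytes into double-byte codes, and maps the codes via a hard-coded table. The table values are merely the ones guessed above; in a real PDF it would have to be built from the composite font's ToUnicode CMap:

import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;

public class DecodeSketch {
    // Resolve the octal escapes of a PDF literal string into raw bytes.
    // (Only \ooo escapes are handled here; a full parser must also handle \n, \(, \), \\ etc.)
    static byte[] unescape(String literal) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < literal.length(); i++) {
            char c = literal.charAt(i);
            if (c == '\\' && i + 3 < literal.length()
                    && Character.isDigit(literal.charAt(i + 1))) {
                out.write(Integer.parseInt(literal.substring(i + 1, i + 4), 8));
                i += 3;
            } else {
                out.write((byte) c);
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Hypothetical two-byte code -> Unicode map; in a real PDF this comes
        // from the composite font's ToUnicode CMap, not from a hard-coded table.
        Map<Integer, Character> toUnicode = new HashMap<>();
        toUnicode.put(0x0045, 'T');
        toUnicode.put(0x0056, 'e');
        toUnicode.put(0x0064, 's');
        toUnicode.put(0x0065, 't');
        toUnicode.put(0x0003, ' ');

        byte[] bytes = unescape("\\000E\\000V\\000d\\000e\\000\\003\\000E\\000V\\000d\\000e");
        StringBuilder text = new StringBuilder();
        for (int i = 0; i + 1 < bytes.length; i += 2) {
            int code = ((bytes[i] & 0xFF) << 8) | (bytes[i + 1] & 0xFF);
            text.append(toUnicode.getOrDefault(code, '?'));
        }
        System.out.println(text); // prints "Test Test"
    }
}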
But there are also many different multi-byte encodings in common use; sometimes they are even ad-hoc encodings created only for the font on the page at hand.
Thus, if you seriously want to do text extraction from PDFs, you really have to study the PDF specification and implement according to its requirements instead of hoping for some conversion hack.
Adobe has published a copy of the old PDF specification ISO 32000-1 on their web page at https://www.adobe.com/content/dam/acom/en/devnet/pdf/pdfs/PDF32000_2008.pdf
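If you only need the extracted text and can use a library that already implements those requirements, the task shrinks to a few lines. A sketch assuming Apache PDFBox 2.x (the library choice is an assumption; any conforming extractor will do):

import java.io.File;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

public class ExtractText {
    public static void main(String[] args) throws Exception {
        // Let the library resolve fonts, encodings and ToUnicode CMaps for us.
        try (PDDocument doc = PDDocument.load(new File("example.pdf"))) {
            System.out.println(new PDFTextStripper().getText(doc));
        }
    }
}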

Related

PDF annotation containing Unicode characters (spanning two bytes) is not showing in Firefox but working fine in Chrome

Setting Unicode characters in the annotation appearance stream using Arial Unicode shows the characters correctly in Chrome but not in Firefox. Any idea on this? The annotation appearance stream is as below; for example, to show a tick symbol:
BT /F3 34 Tf 1.0 0.0 0.0 rg 107.44528 635.27405 Td [
<FEFF27132713>
] TJ ET
Most likely your content stream is invalid.
If I understand you correctly, you want to control the encoding of a string parameter of a text showing instruction in a PDF content stream by prefixing it with a Unicode BOM. This does not work:
A string operand of a text-showing operator shall be interpreted as a sequence of character codes identifying the glyphs to be painted.
With a simple font, each byte of the string shall be treated as a separate character code. The character code shall then be looked up in the font’s encoding to select the glyph, as described in 9.6.5, "Character encoding".
With a composite font (PDF 1.2), multiple-byte codes may be used to select glyphs. In this instance, one or more consecutive bytes of the string shall be treated as a single character code. The code lengths and the mappings from codes to glyphs are defined in a data structure called a CMap, described in 9.7, "Composite fonts".
(ISO 32000-2, section 9.4.3 "Text-showing operators")
In case of your example, therefore, that F3 font
either is a simple font with some single-byte encoding and your <FEFF27132713> string contains 6 separate character codes, each of them representing a glyph by itself if any,
or is a composite font possibly with a multi-byte encoding and your <FEFF27132713> string contains up to 6 separate character codes, each of them representing a glyph by itself if any.
In either case the interpretation of your string depends on a fixed encoding defined by the font object in question; you cannot manipulate it by some BOM prefix.
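What does work is letting the font define the encoding: embed a font that actually contains the glyph and let a library encode the string for you. A sketch assuming PDFBox 2.x; it writes to a page content stream for brevity, but the same principle applies to an annotation appearance stream, and DejaVuSans.ttf is just a placeholder for any font file containing U+2713:

import java.io.File;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDPageContentStream;
import org.apache.pdfbox.pdmodel.font.PDType0Font;

public class DrawTick {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = new PDDocument()) {
            PDPage page = new PDPage();
            doc.addPage(page);
            // PDFBox builds the composite font, its encoding and the ToUnicode CMap for us.
            PDType0Font font = PDType0Font.load(doc, new File("DejaVuSans.ttf"));
            try (PDPageContentStream cs = new PDPageContentStream(doc, page)) {
                cs.beginText();
                cs.setFont(font, 34);
                cs.newLineAtOffset(100, 600);
                cs.showText("\u2713\u2713"); // encoded per the font's CMap, no BOM involved
                cs.endText();
            }
            doc.save("tick.pdf");
        }
    }
}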

Decoding a FlateDecoded section of text in a PDF document

Using peepdf I am analyzing two simple pdf files. Both files contain a single line of text ("ZYXWVUTSRQQRSTUVWXYZ") and were created on Mac OS X.
The first file was created with TextEdit. There are only three streams, and looking at the first one (automatically decoded with peepdf) shows the text clearly.
PPDF> stream 4
q Q q 72 707.272 468 12.72803 re W n /Cs1 cs 0 sc q 0.9790795 0 0 -0.9790795 72 720
cm BT 0.0001 Tc 11 0 0 -11 5 10 Tm /TT1 1 Tf (ZYXWVUTSRQQRSTUVWXYZ) Tj ET
Q Q
The second file was created with MS Word. There are four streams but the decoded text is nowhere to be found. Looking at the corresponding stream in the Word doc does not reveal the decoded string:
PPDF> stream 4
q Q q 18 40 576 734 re W n /Cs1 cs 0 0 0 sc q 0.24 0 0 0.24 90 708.72 cm BT
-0.0004 Tc 50 0 0 50 0 0 Tm /TT2 1 Tf [ (!") -1 (#) -1 ($) -1 (%&'\() -1 (\))
-1 (*) -1 (*) -1 (\)) -1 (\() -1 ('&%$) -1 (#) -1 (") -1 (!) ] TJ ET Q q 0.24 0 0 0.24 239.168 708.72
cm BT 50 0 0 50 0 0 Tm /TT2 1 Tf (+) Tj ET Q Q
It's not apparent to me where the string is in the file or what the information in this stream means. Any insights?
It's not apparent to me where the string is in the file
In general you won't see the clear text in the content stream because the encoding used there need not be a standard encoding, let alone anything ASCII-like.
[ (!") -1 (#) -1 ($) -1 (%&'\() -1 (\)) -1 (*) -1 (*) -1 (\)) -1 (\() -1 ('&%$) -1 (#) -1 (") -1 (!) ] TJ
This operation in its array operand contains your ZYXWVUTSRQQRSTUVWXYZ with some kerning corrections for certain pairs of characters.
It looks like an ad hoc encoding using the bytes from 33 (= 0x21 = '!') onwards: '!' is used for the first glyph needed, the Z, '"' for the second one needed, the Y, '#' for the third one, the X, etc. Your test string not only starts with these chars but also ends with them, and so does the array above, (!") -1 (#) ... (#) -1 (") -1 (!).
Inspect the definition of the font used (TT2). It may (or may not) include information helping you decode this encoding.
or what the information in this stream means. Any insights?
To understand the contents of PDF content streams, you should read the relevant sections of the PDF specification ISO 32000-1, especially chapters 8 Graphics and 9 Text.
As your question is focused on the recognition of text content, you may e.g. read section 9.10.2 Mapping Character Codes to Unicode Values:
A conforming reader can use these methods, in the priority given, to map a character code to a Unicode value. Tagged PDF documents, in particular, shall provide at least one of these methods (see 14.8.2.4.2, "Unicode Mapping in Tagged PDF"):
If the font dictionary contains a ToUnicode CMap (see 9.10.3, "ToUnicode CMaps"), use that CMap to convert the character code to Unicode.
If the font is a simple font that uses one of the predefined encodings MacRomanEncoding, MacExpertEncoding, or WinAnsiEncoding, or that has an encoding whose Differences array includes only character names taken from the Adobe standard Latin character set and the set of named characters in the Symbol font (see Annex D):
a) Map the character code to a character name according to Table D.1 and the font’s Differences array.
b) Look up the character name in the Adobe Glyph List (see the Bibliography) to obtain the corresponding Unicode value.
If the font is a composite font that uses one of the predefined CMaps listed in Table 118 (except Identity–H and Identity–V) or whose descendant CIDFont uses the Adobe-GB1, Adobe-CNS1, Adobe-Japan1, or Adobe-Korea1 character collection:
a) Map the character code to a character identifier (CID) according to the font’s CMap.
b) Obtain the registry and ordering of the character collection used by the font’s CMap (for example, Adobe and Japan1) from its CIDSystemInfo dictionary.
c) Construct a second CMap name by concatenating the registry and ordering obtained in step (b) in the format registry–ordering–UCS2 (for example, Adobe–Japan1–UCS2).
d) Obtain the CMap with the name constructed in step (c) (available from the ASN Web site; see the Bibliography).
e) Map the CID obtained in step (a) according to the CMap obtained in step (d), producing a Unicode value.
NOTE Type 0 fonts whose descendant CIDFonts use the Adobe-GB1, Adobe-CNS1, Adobe-Japan1, or Adobe-Korea1 character collection (as specified in the CIDSystemInfo dictionary) shall have a supplement number corresponding to the version of PDF supported by the conforming reader. See Table 3 for a list of the character collections corresponding to a given PDF version. (Other supplements of these character collections can be used, but if the supplement is higher-numbered than the one corresponding to the supported PDF version, only the CIDs in the latter supplement are considered to be standard CIDs.)
If these methods fail to produce a Unicode value, there is no way to determine what the character code represents in which case a conforming reader may choose a character code of their choosing.
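Libraries that implement this priority list let you apply it with very little code. A hedged PDFBox 2.x sketch (the library and the file name word.pdf are assumptions) that maps the codes 0x21 to 0x2B used in the stream above to Unicode via the font TT2:

import java.io.File;
import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.font.PDFont;

public class CodeToUnicode {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = PDDocument.load(new File("word.pdf"))) {
            PDFont font = doc.getPage(0).getResources().getFont(COSName.getPDFName("TT2"));
            // toUnicode() applies the priority list quoted above: ToUnicode CMap first,
            // then predefined encodings / Differences plus the Adobe Glyph List.
            for (int code = 0x21; code <= 0x2b; code++) {
                System.out.println(String.format("0x%02X -> %s", code, font.toUnicode(code)));
            }
        }
    }
}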
Edit: Concerning the comment
One of the objects gave some font info. It is 'JJOWGO+Cambria' and references object 16 as the 'font file' which was also unreadable. I'll review the manual. Can't find anything online about 'JJOWGO'.
You won't find anything specific about JJOWGO because it most likely is a random key sequence prefixed to Cambria to indicate that not all of that font is embedded but only a subset. Cf. section 9.6.4 Font Subsets of ISO 32000-1:
PDF documents may include subsets of Type 1 and TrueType fonts. The font and font descriptor that describe a font subset are slightly different from those of ordinary fonts. These differences allow a conforming reader to recognize font subsets and to merge documents containing different subsets of the same font. (For more information on font descriptors, see 9.8, "Font Descriptors".)
For a font subset, the PostScript name of the font—the value of the font’s BaseFont entry and the font descriptor’s FontName entry— shall begin with a tag followed by a plus sign (+). The tag shall consist of exactly six uppercase letters; the choice of letters is arbitrary, but different subsets in the same PDF file shall have different tags.
EXAMPLE EOODIA+Poetica is the name of a subset of Poetica®, a Type 1 font.
<<
/FontBBox [ -1475 -2463 2867 3117 ]
/StemV 0
/FontFile2 16 0 R
/Descent -222
/XHeight 467
/Flags 4
/Ascent 950
/FontName /JJOWGO+Cambria
/Type /FontDescriptor
/ItalicAngle 0
/AvgWidth 615
/MaxWidth 2919
/CapHeight 667
>>
This font descriptor contains no obvious encoding information. Have a look at the actual Font dictionary and look for a ToUnicode entry, cf. the quotation of section 9.10.2 above.
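Detecting whether a font name denotes a subset is straightforward, because the tag always consists of exactly six uppercase letters followed by a plus sign. A minimal sketch:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubsetTag {
    // A subset font's BaseFont/FontName starts with six uppercase letters and a '+'.
    private static final Pattern SUBSET = Pattern.compile("^[A-Z]{6}\\+(.+)$");

    public static void main(String[] args) {
        String baseFont = "JJOWGO+Cambria";
        Matcher m = SUBSET.matcher(baseFont);
        if (m.matches()) {
            System.out.println("Subset of " + m.group(1)); // prints "Subset of Cambria"
        } else {
            System.out.println("Not a subset: " + baseFont);
        }
    }
}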
Comments from @mkl made it clear what is happening: the text in the PDF produced by MS Word was using a character map.
I tracked down the font dictionary by searching for objects with a ToUnicode entry:
<< /FirstChar 33
/Widths [ 538 570 571 921 604 648 593 496 621 653 220 ]
/Type /Font
/BaseFont /JJOWGO+Cambria
/LastChar 43
/Subtype /TrueType
/FontDescriptor 13 0 R
/ToUnicode 14 0 R >>
The ToUnicode entry referenced object 14, so I looked at that next:
/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo <<
/Registry (Adobe)
/Ordering (UCS)
/Supplement 0
>> def
/CMapName /Adobe-Identity-UCS def
/CMapType 2 def
1 begincodespacerange
<00><FF>
endcodespacerange
1 beginbfchar
<2b><0009 000d 0020 00a0>
endbfchar
10 beginbfrange
<21><21><005a>
<22><22><0059>
<23><23><0058>
<24><24><0057>
<25><25><0056>
<26><26><0055>
<27><27><0054>
<28><28><0053>
<29><29><0052>
<2a><2a><0051>
endbfrange
endcmap
CMapName currentdict /CMap defineresource pop
end
end
Section 9.10.3 of ISO 32000-1 explains how beginbfrange maps character ranges to each other. Ranges of character codes are mapped to Unicode values. The "range" 21-21 contains a single character, which is "!". It is mapped to U+005a ("Z"). The mapping contains a line for every character in my test document, from Z to Q. (! to *)
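As a sanity check, the mapping can be hard-coded from the CMap above and applied to the character codes found in the content stream. This is only an illustration; a real extractor would parse the ToUnicode stream instead of hard-coding it:

import java.util.HashMap;
import java.util.Map;

public class ToUnicodeDemo {
    public static void main(String[] args) {
        // The bfranges shown above: codes 0x21..0x2A map to 'Z' (U+005A) down to 'Q' (U+0051);
        // the bfchar maps 0x2B to the whitespace sequence from its entry.
        Map<Integer, String> map = new HashMap<>();
        for (int code = 0x21; code <= 0x2a; code++) {
            map.put(code, String.valueOf((char) (0x5a - (code - 0x21))));
        }
        map.put(0x2b, "\t\r \u00a0");

        // The character codes of the first TJ array's strings, concatenated:
        String codes = "!\"#$%&'()**)('&%$#\"!";
        StringBuilder text = new StringBuilder();
        for (char c : codes.toCharArray()) {
            text.append(map.getOrDefault((int) c, "?"));
        }
        System.out.println(text); // prints "ZYXWVUTSRQQRSTUVWXYZ"
    }
}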

How is kerning encoded on embedded Adobe Type 1 fonts in PDF files?

The Adobe PDF reference talks about the /Widths array and the /FontFile stream, but Adobe Type 1 font programs (.pfb or .pfa files) don't include font metrics; those are included in font metric files (.afm or .pfm files), and these are not embedded in the PDF file.
Can PDF only encode character width metrics, or can it encode kerning pairs too? How?
If you study section 9.4.4 of the PDF specification ISO 32000-1 (see below), you'll see that no special kerning information (e.g. extracted from the font program) is included in the calculation of the glyph displacement.
You'll also see, though, that there is a Tj value which denotes a number in a TJ array, if any, which specifies a position adjustment. This value is used to implement kerning.
E.g. that phrase "denotes a number in a TJ array, if any, which specifies a position adjustment" from the specification itself is set as:
[( de)-5.5(no)-5.5(te)-5.5(s a nu)-5.5(m)-5.7(b).5(e)-5.5(r).3( in a )]TJ
...
You see for example kerning applied in denotes between 'e' and 'n', 'o' and 't', and 'e' and 's'.
The section from the specification defines the horizontal displacement as tx = ((w0 − Tj/1000) · Tfs + Tc + Tw) · Th, where Tj is the number from the TJ array, Tfs the font size, Tc the character spacing, Tw the word spacing, and Th the horizontal scaling.
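A minimal illustration of how such an adjustment number feeds into that formula; the font size and horizontal scaling below are made-up values:

public class TjAdjustment {
    public static void main(String[] args) {
        double adjustment = -5.5;       // number from the TJ array, in thousandths of text space
        double fontSize = 10;           // Tfs, made-up value
        double horizontalScaling = 1.0; // Th (100%)

        // The adjustment is subtracted in the displacement formula, so a negative
        // number moves the following glyphs apart and a positive one pulls them closer.
        double extraGap = -adjustment / 1000 * fontSize * horizontalScaling;
        System.out.printf("extra gap: %f text space units%n", extraGap); // 0.055
    }
}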

write in unicode text on visible signature - pdfbox

I've built a PDF using PDFBox, with a visible signature. I write some text like this:
...
builderSting.append("Tm\n");
builderSting.append(" /F1 " + fontSize + "\n");
builderSting.append("Tf\n");
builderSting.append("(hello world)");
builderSting.append("Tj\n");
builderSting.append("ET");
...
PDStream stream= ...;
stream.createOutputStream().write(builder.toString().getBytes("ISO-8859-1"));
Everything works well, but if I write some Unicode characters into builderString, there are "???"s instead of the text.
That's the sample PDF: link here
QUESTION 1) When I look at the PDF structure, there are question marks instead of the text. How can I write Unicode characters?
9 0 obj
<<
/Type /XObject
/Subtype /Form
/BBox [100 50 0 0]
/Matrix [1 0 0 1 0 0]
/Resources <<
/Font 11 0 R
/XObject <<
/img0 12 0 R
>>
/ProcSet [/PDF /Text /ImageB /ImageC /ImageI]
>>
/FormType 1
/Length 13 0 R
>>
stream
q 93.70079 0 0 50 0 0 cm /img0 Do Q
BT
1 0 0 1 93.70079 25 Tm
/F1 2
Tf
(????)Tj
ET
endstream
endobj
I've a font with encoding WinAnsiEncoding. Can I use another encoding in PDFBox?
PDFont font = PDTrueTypeFont.loadTTF(template, new File("//fontName.ttf"));
font.setFontEncoding(new WinAnsiEncoding());
QUESTION 2) I've embedded a font in the PDF, but the text is not written with this font (in the visible signature rectangle). Why?
Question 3) When I remove the font, the text is still there (when the text is in English). What is the default font? /F1 - is that the first font?
Question 4) How to calculate the width of my text in the visible signature? Any ideas?
QUESTION 1) When I look at the PDF structure, there are question marks instead of the text. How can I write Unicode characters?
I assume that with unicode characters you mean characters present in Unicode but not in e.g. Latin-1. (Because the letter 'a' for example does have a Unicode representation, too, but most likely won't cause you trouble.)
You call getBytes("ISO-8859-1") on your StringBuilder result. Your unicode characters most likely are not in ISO 8859-1. Thus, String.getBytes returns the ASCII code for a question mark in their respective place.
If the question was merely how to write Unicode characters to an output stream in Java, the answer would be easy: choose an encoding which contains all your characters, e.g. UTF-8, which all consumers of your program support, and call String.getBytes for that encoding.
The case at hand is different, though, as you want to serialize that information as a PDF form XObject stream. In this context your whole approach is somewhere along the route from highly questionable to completely wrong:
In PDFs, each font might come along with its own encoding which might be similar to a common encoding, e.g. /WinAnsiEncoding, or completely custom. These encodings, furthermore, in many cases are restricted to one byte per character, but in case of composite fonts they can also be multi-byte-encodings.
As a corollary, not all elements of the stream need to be encoded using the same encoding. E.g. the operator names Tm, Tf, and Tj are encoded using their ASCII codes, while the characters of a string to be displayed have to be encoded using the respective font's encoding (and may thereafter be yet again hex-encoded if written in angle brackets <>).
Thus, creating the stream as a string and then converting it to bytes with a single encoding only works if all used fonts use the same encoding (for the actually used code points), which furthermore needs to be ASCII'ish to correctly represent the operators.
Essentially, you should directly construct the stream in some byte buffer and for each inserted element use the appropriate encoding. In case of characters to be displayed, therefore, you have to be aware of the encoding used by the currently selected font.
If you want to do it right, first study the PDF specification ISO 32000-1, especially the sections on general syntax and chapter 9 Text.
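A hedged sketch of that byte-buffer approach with PDFBox 2.x (the font file name arialuni.ttf and the resource name /F1 are assumptions): operators are written as ASCII, while the string to show is encoded with the selected font's own encoding via PDFont.encode and emitted as a hex string:

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.nio.charset.StandardCharsets;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.font.PDFont;
import org.apache.pdfbox.pdmodel.font.PDType0Font;

public class BuildTextStream {
    static void writeAscii(ByteArrayOutputStream out, String s) {
        byte[] bytes = s.getBytes(StandardCharsets.US_ASCII);
        out.write(bytes, 0, bytes.length);
    }

    public static void main(String[] args) throws Exception {
        try (PDDocument doc = new PDDocument()) {
            // The embedded font must contain glyphs for all characters to be shown.
            PDFont font = PDType0Font.load(doc, new File("arialuni.ttf"));

            ByteArrayOutputStream content = new ByteArrayOutputStream();
            writeAscii(content, "BT\n1 0 0 1 93.7 25 Tm\n/F1 12 Tf\n<");
            // Encode the text with the font's own encoding (glyph codes), not ISO-8859-1.
            for (byte b : font.encode("héllo wörld")) {
                writeAscii(content, String.format("%02X", b));
            }
            writeAscii(content, "> Tj\nET\n");

            // content.toByteArray() can now be written into the form XObject's stream;
            // the font must be registered as /F1 in that XObject's resources.
            System.out.println(new String(content.toByteArray(), StandardCharsets.ISO_8859_1));
        }
    }
}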
QUESTION 2) I've embedded a font in the PDF, but the text is not written with this font (in the visible signature rectangle). Why?
In the resources of the form XObject in question there is exactly one embedded font, associated with the name /F0. In your stream, though, you have /F1 2 Tf, i.e. you select a font /F1 at size 2.
Question 3) When I remove the font, the text is still there (when the text is in English). What is the default font?
According to the specification, section 9.3.1,
font shall be the name of a font resource in the Font subdictionary of the current resource dictionary [...]
There is no initial value for either font or size
Most likely, though, PDF viewers for the sake of compatibility with old or broken documents use some default font.
Question 4) How to calculate the width of my text in the visible signature? Any ideas?
The width obviously depends on the metrics of the font used (glyph widths in this case) and the graphics state you set (font size, character spacing, word spacing, current transformation matrix, text transformation matrix, ...).
In your case you hardly do anything in the graphics state, so only the selected font size from it is of interest. The more interesting part, therefore, are the character widths from the font metrics. As long as you use the standard 14 fonts, you find the metrics here. As soon as you start using other, custom fonts, you have to read them from the font definition files yourself.
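For a font loaded with PDFBox 2.x (an assumption, as is the font file name), the calculation for a string without character or word spacing boils down to:

import java.io.File;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.font.PDFont;
import org.apache.pdfbox.pdmodel.font.PDType0Font;

public class TextWidth {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = new PDDocument()) {
            PDFont font = PDType0Font.load(doc, new File("arialuni.ttf"));
            float fontSize = 2;
            // getStringWidth returns the width in 1/1000 of text space units,
            // so scale by the font size (character/word spacing ignored here).
            float width = font.getStringWidth("hello world") / 1000 * fontSize;
            System.out.println("width: " + width);
        }
    }
}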
Ad 1)
Could it be that
stream.createOutputStream().write(builder.toString().getBytes("ISO-8859-1"));
should be
stream.createOutputStream().write(builderString.toString().getBytes("UTF-8"));
The conversion in getBytes to ISO-8859-1 turns any special character missing from ISO-8859-1 into a ?.

Programmatic extraction of Unicode character values from True type font file in C/C++

I am trying to extract the UTF-8 character values from an embedded TrueType font file contained in a PDF. Is anyone aware of a method of doing this? The values in the PDF might be something like '2%dd! w!|<~' and this would end up as 'Hello World' in the PDF, represented by the corresponding glyphs from the TTF. I'd like to be able to extract the wchar values here. Is this possible? Does the UTF-8 value for each character exist in the TTF?
Glyph IDs do not always correspond to Unicode character values - especially with non-Latin scripts that use a lot of ligatures and variant glyph forms where there is not a one-to-one correspondence between glyphs and characters.
Only Tagged PDF files store the Unicode text - otherwise you may have to reconstruct the characters from the glyph names in the fonts. This is possible if the fonts used have glyphs named according to Adobe's Glyph Naming Convention or Adobe Glyph List Specification - but many fonts, including the standard Windows fonts, don't follow this naming convention.
UTF-8 is an encoding that allows UTF-8 encoded streams to be decoded to reveal a sequence of Unicode code points. In any case, PDF does not encode text using UTF-8. For simple TrueType fonts, each glyph is encoded using 8 bits.
To decode:
Read the Differences array and the base encoding from the font definition.
Read 8 bits at a time and produce an Adobe glyph name using the encoding and Differences array read in step 1.
Use the Adobe glyph name to look up the Unicode value in the Adobe Glyph List.
This is detailed in section 9.10 of the PDF specification.
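A sketch of these three steps with tiny hard-coded stand-ins for the encoding/Differences data and the Adobe Glyph List; all values below are hypothetical, a real extractor reads them from the font dictionary and the published AGL:

import java.util.HashMap;
import java.util.Map;

public class GlyphNameDecode {
    public static void main(String[] args) {
        // Step 1: base encoding plus the font's /Differences array (hypothetical values).
        Map<Integer, String> codeToGlyphName = new HashMap<>();
        codeToGlyphName.put(0x21, "H");
        codeToGlyphName.put(0x22, "e");
        codeToGlyphName.put(0x23, "l");
        codeToGlyphName.put(0x24, "o");

        // Step 3 input: a tiny excerpt of the Adobe Glyph List (glyph name -> Unicode).
        Map<String, Integer> glyphListToUnicode = new HashMap<>();
        glyphListToUnicode.put("H", 0x0048);
        glyphListToUnicode.put("e", 0x0065);
        glyphListToUnicode.put("l", 0x006C);
        glyphListToUnicode.put("o", 0x006F);

        // Step 2: read the string one byte (one code) at a time and resolve it.
        byte[] stringBytes = {0x21, 0x22, 0x23, 0x23, 0x24};
        StringBuilder text = new StringBuilder();
        for (byte b : stringBytes) {
            String glyphName = codeToGlyphName.get(b & 0xFF);
            Integer unicode = glyphName == null ? null : glyphListToUnicode.get(glyphName);
            text.appendCodePoint(unicode == null ? '?' : unicode);
        }
        System.out.println(text); // prints "Hello"
    }
}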