How to position dependent glyphs properly for Indian text in a PDF

The PDF I generate with Hindi text renders the dependent glyphs separately from their base glyphs, and I need to make the text readable. Please help me understand how to do this; I have also read the PDF specification but could not locate how to handle it.
Attached are the different streams, the sample output, and the expected output.
I also need help with printing the other two characters shown in the image provided. Kindly help me resolve these issues.
Thank you
[Actual output ](https://i.stack.imgur.com/pEjYf.png) [Expected output ](https://i.stack.imgur.com/0D4xb.png) [Font descriptor object](https://i.stack.imgur.com/hZ0So.png) [Base font object ](https://i.stack.imgur.com/gJnV0.png) [Font object](https://i.stack.imgur.com/aBycC.png) [Content stream ](https://i.stack.imgur.com/V3kgc.png)

Related

Camelot in Python does not behave as expected

I have two PDF documents, both with the same layout but containing different information. The problem is:
I can read one perfectly, but for the other the extracted data is unrecognizable.
This is an example that I can read perfectly (download here):
import camelot

from_pdf = camelot.read_pdf('2019_05_2.pdf', flavor='stream', strict=False)
df_pdf = from_pdf[0].df
camelot.plot(from_pdf[0], kind='text').show()
print(from_pdf[0].parsing_report)
This is the DataFrame, as expected:
This is an example where, after reading, the information is unrecognizable (download here):
from_pdf = camelot.read_pdf('2020_04_2.pdf', flavor='stream', strict=False)
df_pdf = from_pdf[0].df
camelot.plot(from_pdf[0], kind='text').show()
print(from_pdf[0].parsing_report)
This is the dataframe with unrecognizable information:
I don't understand what I have done wrong and why the same code doesn't work for both files. I need some help, thanks.
The problem: malformed PDF
Simply put, the problem is that your second PDF is malformed/corrupted. It does not contain correct font information, so it is impossible to extract text from the PDF as is. This is a known and difficult problem (see this question).
You can check this by trying to open the PDF with Google Docs.
Google Docs tries to extract the text, and this is the result:
Possible solutions
If you want to extract the text, you can print the document to an image-based PDF and perform an OCR text extraction.
However, Camelot does not currently support image-based PDFs, so it is not possible to extract the table.
If you have no way to recover a well-formed PDF, you could try this strategy (a sketch follows below):
1. print the PDF to an image-based PDF
2. add a good text layer to the image-based PDF (using OCRmyPDF)
3. try using Camelot to extract the tables
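
As a rough illustration of that strategy, here is a minimal Python sketch, assuming OCRmyPDF (with its Python API) and Camelot are installed; the file names, language, and table index are placeholder assumptions, and the quality of the extracted table will depend entirely on the OCR result.

import camelot
import ocrmypdf

# force_ocr=True rasterizes each page and adds a fresh OCR text layer,
# ignoring the existing (broken) text: this covers steps 1 and 2 above.
ocrmypdf.ocr('2020_04_2.pdf', '2020_04_2_ocr.pdf', force_ocr=True, language='eng')

# Step 3: run Camelot on the repaired, OCR'd file.
tables = camelot.read_pdf('2020_04_2_ocr.pdf', flavor='stream')
print(tables[0].parsing_report)
print(tables[0].df)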

Restoring a PDF ToUnicode CMap table

I have multiple PDF files without a ToUnicode CMap table. The absence of this table prevents me from copying text out of the PDF files.
As far as I know, it is possible to add a ToUnicode mapping to a PDF file, but in my case adding static values is not an option: different files use different glyph codes.
So the question is the following: is there any way to restore the ToUnicode CMap table, perhaps with the help of Ghostscript, or are there any options at all?
Thanks.
No, you cannot add ToUnicode CMaps to an existing PDF file using Ghostscript.
In the general case, you can't do it at all, except manually. As you note in the question, different files will be constructed to use different character code->Glyph mappings, which means that the character code to Unicode mapping will also be different.
Since the character code selection is often based on the order in which glyphs are used in a file (so the first glyph used becomes character code 1, the second character code 2, and so on), you can see that there is no prospect of a 'one size fits all' solution.
You could use some kind of OCR to scan the rendered output, identify each glyph and find the Unicode code point for it. Then you could construct a CMap by identifying the character code for the glyph and mapping it to the Unicode value.
You could then add the ToUnicode CMap to the PDF file as a stream object and point the font dictionary's /ToUnicode entry at its object number.
Ghostscript won't do any of that for you, and I haven't heard of any tool which will.
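
That said, the final step of attaching a hand-built ToUnicode CMap can be scripted. Here is a minimal sketch in Python using pikepdf (my own assumption, not part of the original answer); the code-to-Unicode pairs, the file names, and the font name /F1 are hypothetical placeholders, and in practice the mapping would have to be recovered per file, e.g. via the OCR approach described above.

import pikepdf

# Hypothetical character-code -> Unicode pairs recovered for this particular file
code_to_unicode = {0x01: 0x0905, 0x02: 0x0915}

bfchars = "\n".join(f"<{code:02X}> <{uni:04X}>" for code, uni in code_to_unicode.items())
cmap = f"""/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
/CIDSystemInfo << /Registry (Adobe) /Ordering (UCS) /Supplement 0 >> def
/CMapName /Custom-UCS def
/CMapType 2 def
1 begincodespacerange
<00> <FF>
endcodespacerange
{len(code_to_unicode)} beginbfchar
{bfchars}
endbfchar
endcmap
CMapName currentdict /CMap defineresource pop
end
end"""

with pikepdf.open('input.pdf') as pdf:
    page = pdf.pages[0].obj                    # underlying page dictionary
    font = page.Resources.Font['/F1']          # assumes the font of interest is /F1
    font.ToUnicode = pdf.make_stream(cmap.encode('ascii'))
    pdf.save('with_tounicode.pdf')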

Extract sections of PDF

I am trying to extract sections of a PDF file for use in text analysis. I tried using pdf-extract to accomplish this. However, a command such as
pdf-extract extract --regions --no-lines Bauer2010.pdf
only extracts the (x, y) coordinates of a region, as in the example below.
<region x="226.32" y="750.47" width="165.57" height="6.37"
line_height="6.37" font="BGBFHO+AdvP4DF60E">Patient Education and
Counseling 79 (2010) 315-319</region>
Can sections of a PDF be extracted?
Have a look at http://text-analyzer.com where you can upload your PDF file and it will convert it into a format suitable for Natural Language Processing. Once converted into a text file it can then process the file, breaking it down into sentences with sentiment analysis. It has over 40 different types of sentence views where you can tag sections. Those tagged sentences can be exported.

What does an /ActualText of FEFF0009 mean in a PDF?

I've been looking into a PDF file to understand how it is built.
I noticed that InDesign has created PDFs with text as below (after decompression using pdftk).
0 Tc /Span<</ActualText<FEFF0009>>> BDC
4.018 -0.2 Td
( )Tj
I understand the role of ActualText (for copy/paste/searching) but I'm wondering exactly how I should be interpreting the FEFF0009. It looks like a UTF-16 string with BOM chars to represent a tab character. This seems incorrect as it's really a space. I'm wondering if there is a special meaning here?
.. This seems incorrect as it's really a space.
No, it's really a tab.
14.9.4 Replacement Text
NOTE 1: Just as alternate descriptions can be provided for images and other items that do not translate naturally into text (as described in the preceding sub-clause), replacement text can be specified for content that does translate into text but that is represented in a nonstandard way.
(PDF 32000-1:2008)
The PDF text engine does not support the concept of 'tabs'. In this case, InDesign mimicked the function of a tab character by inserting a space in the text stream; it could either set the space width to match the distance spanned by the original tab, or apply a large relative displacement to the rest of the text (which is what it did here: the 4.018 horizontal displacement in your code snippet).
The general idea is that a space is rendered on the position of the tab, but when you copy this text and paste somewhere else you get a tab character. I suppose the 'space' is only inserted to have something to copy.
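
For context, a complete marked-content sequence of this kind looks roughly like the following (an illustrative reconstruction, not taken from the original file); the EMC operator closes the span that BDC opened:

0 Tc                                    % character spacing
/Span <</ActualText <FEFF0009>>> BDC    % begin span: copy/paste/search yields U+0009 (tab)
4.018 -0.2 Td                           % move the text position across the tab's span
( ) Tj                                  % draw a single space glyph there
EMC                                     % end of the marked-content span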

Extract information from tables in a TrueType font file

While parsing a PDF file, my parser encountered a Tf operator where the value of the Subtype entry in the font dictionary is set to TrueType. The Encoding entry is not present, and the symbolic flag is set.
My question is: how am I supposed to map the character codes to characters with no encoding?
The PDF reference, section 5.5.5 Character Encoding, states that a TrueType font has internal data represented in tables in the font file. It seems that those tables would help me map the character codes. Am I getting it right? How can I extract that information from the font file?
The font file extracted from the PDF gave something like:
I read Apple's documentation, The TrueType Font File, but still don't get how I can extract this information from those tables.
Any help, links or reading suggestion would be greatly appreciated.
The symbolic flag means that the encoding is confined to the [0..255] range: every character code must be in this range, and the font provides glyphs only for those codes. The mapping from those codes to glyphs comes from the font's own cmap table rather than from a PDF Encoding entry.
Here is a great set of resources regarding TrueType and OpenType font formats.
You can use the FreeType library function FT_Get_Char_Index to go from a character code to a glyph index; see the FT_Get_Char_Index documentation.
You'll have to dump the embedded TrueType font to a file and load it with FreeType to get an FT_Face first.
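
A rough sketch of that approach in Python, using the freetype-py binding instead of the C API (the file name and character code are placeholder assumptions, and the 0xF000 fallback reflects the (3,0) 'Microsoft Symbol' cmap convention often used by symbolic fonts):

import freetype

# Assumes the embedded font program has already been extracted from the
# PDF's FontFile2 stream and written to 'embedded_font.ttf'.
face = freetype.Face('embedded_font.ttf')

code = 0x41                                  # a character code taken from the content stream
glyph_index = face.get_char_index(code)

# Symbolic fonts often map codes into the 0xF000-0xF0FF range of their
# (3,0) cmap subtable; try that if the direct lookup finds nothing.
if glyph_index == 0:
    glyph_index = face.get_char_index(0xF000 + code)

print(code, '->', glyph_index)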