I have PDF files with embedded OCR data (I have already OCR'd them), so they are searchable. Now I want to extract this OCR data because I want to put it into my Tomcat 6 search server. For that, I need the plain OCR data.
So my question is: is it possible to extract this embedded OCR data from the PDF files?
It would be nice to get files with coordinates. But it would also be sufficient to get plaintext files.
You should be able to do this with iText or iTextSharp. However, iTextSharp has very little documentation, and a good number of its functions are not equivalent to those found in iText.
PDFsharp does not support iref (cross-reference) streams. Those are pretty much the only comprehensive open-source options. If you do not mind paying, Vista solutions may have something for you; they mostly handle workflow, but they have some pretty extensive PDF libraries as well.
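As a rough illustration, here is a minimal sketch using iText 5 in Java (the file name is a placeholder). It assumes the OCR layer was written into the PDF as an ordinary, usually invisible, text layer, which is how most OCR tools make a PDF searchable:

```java
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;

public class ExtractOcrText {
    public static void main(String[] args) throws Exception {
        PdfReader reader = new PdfReader("scanned-and-ocrd.pdf");
        StringBuilder text = new StringBuilder();
        for (int page = 1; page <= reader.getNumberOfPages(); page++) {
            // The OCR layer is stored as ordinary (usually invisible) PDF text,
            // so a normal text-extraction call picks it up.
            text.append(PdfTextExtractor.getTextFromPage(reader, page)).append('\n');
        }
        reader.close();
        System.out.println(text);
    }
}
```

By default this gives plain text in reading order; if you also need coordinates, iText's parser package lets you plug in your own TextExtractionStrategy/RenderListener, which receives the position of each text chunk.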
I have a question about generating PDFs with wkhtmltopdf. I know it's possible to use custom fonts in my HTML, but I think the operating system used to view the PDF needs to have these fonts installed. Correct?
My question is whether it's possible to include these fonts in the PDF, so that when the PDF is generated I can send it to a print office to print 50 copies and they see the PDF exactly the same as I do, without having these fonts installed.
This is certainly possible.
It's called "embedding a font" in pdf lingo.
Most pdf generation libraries should support this.
PDF comes in different flavors (standards). One of these standards, PDF/A, is meant for long-term storage (the A stands for archiving). The idea is that the document's look and feel should be preserved as much as possible. To achieve this without depending on the operating system (and the fonts it may ship with), the PDF/A standard requires that fonts be embedded.
https://en.wikipedia.org/wiki/PDF/A
I don't know how to do this in the library you are using. But I do know it's possible with iText.
This is a great tutorial on it, which aside from giving you more information about iText, will also illustrate the problem with custom fonts in a very visual way.
https://developers.itextpdf.com/tutorial/using-fonts-pdf-and-itext
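As a rough illustration of what font embedding looks like in iText 5 (Java), here is a minimal sketch; the font path and output file name are placeholders:

```java
import com.itextpdf.text.Document;
import com.itextpdf.text.Font;
import com.itextpdf.text.Paragraph;
import com.itextpdf.text.pdf.BaseFont;
import com.itextpdf.text.pdf.PdfWriter;

import java.io.FileOutputStream;

public class EmbedFontExample {
    public static void main(String[] args) throws Exception {
        Document document = new Document();
        PdfWriter.getInstance(document, new FileOutputStream("embedded-font.pdf"));
        document.open();

        // BaseFont.EMBEDDED asks iText to embed the font program in the PDF,
        // so the file renders the same even if the viewer's system lacks the font.
        BaseFont baseFont = BaseFont.createFont("fonts/MyCustomFont.ttf",
                BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
        document.add(new Paragraph("Text in the embedded custom font",
                new Font(baseFont, 12)));

        document.close();
    }
}
```

With IDENTITY_H encoding, iText embeds a subset of the font containing only the glyphs actually used, which keeps the file size reasonable.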
Currently, I have a PDF that is not searchable, and I am wondering what the best process is for preparing the file for ColdFusion so I can index it.
In particular, I am wondering whether a PDF file needs to be readable before using the extracttext action in cfpdf to pull the text from it.
I really appreciate the advice, and I hope it helps other people who are interested in indexing PDF files with ColdFusion.
I was considering extracting the text with Tesseract, as suggested here:
Performing Optical Character Recognition on PDF's from ColdFusion using a Java or .NET Library?
but if there is a built-in feature in ColdFusion, I would much rather use that, and I think it would be more helpful to other people to know whether ColdFusion can handle this task natively.
I am using tools such as PDFBox to interpret PDF files (including text, strokes, glyphs and images) and can access the streams and dictionaries. I am not clear on how these components link together and how to interpret them. In particular I would like to know how to access fonts from the streams.
NOTE: I am not interested in tutorials on how to create PDF documents
You should probably start by reading the PDF Reference. It's a huge document, but you only need to read the relevant parts.
To understand font streams, you basically need to read about the TrueType and Type 1 font formats (not easy reading either). PDF may contain other font types, but TrueType and Type 1 are probably the most widely used.
Fiddling with fonts can be complicated, so you will probably find it easier to use a font library such as FreeType for extracting information from PDF font streams.
There are lots of good articles on planetpdf.com, and many PDF developers run blogs with useful general articles. We have published a whole load on our blog (http://www.jpedal.org/PDFblog/).
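For the PDFBox side specifically, here is a rough sketch (PDFBox 2.x, Java; the input file name is a placeholder) that walks each page's resource dictionary and lists the fonts it references:

```java
import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDResources;
import org.apache.pdfbox.pdmodel.font.PDFont;

import java.io.File;

public class ListFonts {
    public static void main(String[] args) throws Exception {
        try (PDDocument document = PDDocument.load(new File("input.pdf"))) {
            int pageIndex = 0;
            for (PDPage page : document.getPages()) {
                PDResources resources = page.getResources();
                for (COSName name : resources.getFontNames()) {
                    // Each font in the page's resource dictionary is exposed as a PDFont;
                    // the underlying font program (if embedded) lives in the font descriptor.
                    PDFont font = resources.getFont(name);
                    System.out.printf("page %d: %s (%s), embedded=%b%n",
                            pageIndex, name.getName(), font.getName(), font.isEmbedded());
                }
                pageIndex++;
            }
        }
    }
}
```

From there, font.getFontDescriptor() gives access to the raw font program (the FontFile/FontFile2/FontFile3 streams) if you need to hand it to something like FreeType.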
I'm currently using my scanner to turn my PDFs into searchable PDFs. The OCR is already taken care of, since I can use ctrl-f within the PDF.
How can I get at the OCR'd content from my program, though?
I'm open to using Java or Ruby; the question is pretty much programming-language agnostic. Is the OCR'd text openly accessible by reading the file?
Not sure how your OCR software creates the PDF, but could you use a third-party library (jPedal) or a tool such as iText or Xpdf to extract the text from the resulting PDF?
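In most cases, yes: if Ctrl-F finds the text, the OCR result was written into the PDF as an ordinary (usually invisible) text layer, so any generic extractor can read it. As a minimal sketch, using Apache PDFBox in Java (the file name is a placeholder):

```java
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

import java.io.File;

public class DumpText {
    public static void main(String[] args) throws Exception {
        try (PDDocument document = PDDocument.load(new File("scanned.pdf"))) {
            // A searchable scan stores the OCR result as an (often invisible) text layer,
            // which a normal text-extraction pass can read.
            String text = new PDFTextStripper().getText(document);
            System.out.println(text);
        }
    }
}
```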
What I need is to read a PDF, make some transformations (generate TOC bookmarks), and write it back.
I found http://hackage.haskell.org/package/HPDF , but it only mentions generating PDFs, not parsing them (although I could have missed it).
Haskell is chosen purely for (self)educational purposes.
There are a few tools for PDF manipulation, though they seem to be biased towards generation rather than parsing:
http://johnmacfarlane.net/pandoc/
Pandoc is a great cross-markup library, but doesn't support PDF parsing (it does support PDF generation from a variety of formats).
There's also:
http://hackage.haskell.org/package/HsHaruPDF
http://hackage.haskell.org/package/pdf2line -- tool for extracting text from pdf
http://hackage.haskell.org/package/HPDF -- another pdf generation library
I'm not sure we have a good parsing tool yet.
Also as a learning exercise, I started a PDF parsing library in Haskell, but it's incomplete and has been languishing a bit from lack of attention. I'd be happy to share it with you, and would love feedback, improvements, etc. It's not currently hosted on hackage, but if you're interested in working with an incomplete implementation, let me know and I'll ask some colleagues for advice on getting it up there.
Here's a Haskell binding to parts of Xpdf:
http://hackage.haskell.org/package/pdf2line
Check out the pdf-toolbox library. Its support for generating PDF files is low level, but powerful enough for your task.
Here is an example of how to change the title of an existing PDF file using the incremental update feature.
Another package to consider is rakhana, which is also on Hackage.