Is it possible to obfuscate the bytes that are visible when a PDF file is opened in a hex editor? Also, if the file is obfuscated, will there be any problem viewing its contents?
You will always be able to see whatever bytes are within a file using a hex editor.
There might be ways to generate your PDF pages using methods that don't involve writing the text directly into the PDF (for example, using obfuscated JavaScript).
As answered above, the bytes of the file are always visible when viewed with a hex editor. However, there are some options to hide/protect data in the file:
You could encrypt either the whole PDF or parts of its data. Note that encryption/decryption always requires a secret; when the file is fully encrypted, you can't read it without the key (see the sketch after this list).
You can add additional, similar data objects but set them invisible in the PDF. Note that this technique blows up the size of the file.
You can use scripting languages which dynamically build up your PDF. Be aware that this could look suspicious to users or to anti-virus software.
You can use steganography tools to hide your data. For example, a tool you could use is steghide.
You can simply compress data streams in the PDF, e.g. using gzip or similar compression tools. That way the data can't be read directly. However, it is easy for anyone to recognize and decompress.
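For the encryption option, here is a minimal sketch using iText 5 for Java; the file names, passwords, and permission flags are placeholders, and other libraries offer equivalent calls:

import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;
import com.itextpdf.text.pdf.PdfWriter;
import java.io.FileOutputStream;

public class EncryptPdf {
    public static void main(String[] args) throws Exception {
        PdfReader reader = new PdfReader("input.pdf");
        PdfStamper stamper = new PdfStamper(reader, new FileOutputStream("encrypted.pdf"));
        // AES-128 encryption; the user password is the "secret" mentioned above
        stamper.setEncryption("userpass".getBytes(), "ownerpass".getBytes(),
                PdfWriter.ALLOW_PRINTING, PdfWriter.ENCRYPTION_AES_128);
        stamper.close();
        reader.close();
    }
}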
This file I downloaded is supposed to be a PDF (I think, could be just a text file for all I know) but see the picture below for what the file looks like. Does anyone know what this is or if it can be converted?
If it's from a PDF file, it is likely to be Flate encoded (the same type of compression as is used with zip files, but no, you cannot open a PDF file with a zip utility). This is the most common compression in a PDF for non-image data. It's not ASCIIHex or ASCII85 encoded. It could be, but likely isn't, LZW or RunLength (RLE) encoded. If it is image data, it could be CCITTFax, JBIG2, DCT (essentially JPEG), or JPX (JPEG 2000) encoded.
In some cases, it is possible that parts of a PDF might be encoded by more than one of these filters, so a combination of, say, DCT and ASCII85 could be used, but this isn't as common anymore.
Or the PDF file could be encrypted, in which case you have a choice of RC4 or different flavors of AES encryption. It's also possible that custom encryption was used (e.g. if the PDF file is an E-Book).
The screenshot you provided doesn't contain enough information to determine which case applies to that particular part of the file, but the bottom line is that you need to read your PDF file with software that understands the PDF format; a text editor won't do.
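If you want to check for yourself whether such a blob is Flate data, a minimal Java sketch (assuming you have already isolated the raw bytes between the stream and endstream keywords) would be:

import java.io.ByteArrayOutputStream;
import java.util.zip.Inflater;

public class InflateStream {
    public static byte[] inflate(byte[] raw) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(raw);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf); // throws DataFormatException if this isn't Flate data
            if (n == 0) break; // incomplete stream or preset dictionary needed
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}

In practice, though, a proper PDF library will handle filters, filter chains, and encryption for you.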
I wasn't able to find anything on the internet, and I get the feeling that what I want is not such a trivial thing. To make a long story short: I'd like to get my hands on the underlying code that describes a selected area of a PDF document. I've been looking for libraries or open-source readers but haven't found anything useful yet.
Does anything exist that can accomplish this, or that could be reused (like an open-source reader) to get there a little faster without having to write everything from scratch?
You can convert a whole PDF document to PostScript using pdftops, one of the utilities from the poppler PDF rendering library.
This utility enables you to convert individual pages, which is at least a start.
If you just want to extract bitmapped images, try pdfimages from the same package. This extraction can also be restricted to individual pages.
The poppler library was originally written for UNIX-like systems, but there are a couple of Windows builds available.
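For example, to convert only page 24, or to extract only the images on that page (poppler's -f and -l options select the first and last page to process; the page number and file names here are just placeholders):

pdftops -f 24 -l 24 document.pdf page24.ps
pdfimages -f 24 -l 24 document.pdf imageprefix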
The open source tool from iText called iText RUPS does what you want, showing you all the PDF commands for a particular PDF and allowing you to visualize the structure and relationships.
http://sourceforge.net/projects/itextrups/
We have created PDFs by converting individual PostScript pages into a single PDF (and embedding the appropriate fonts) using Ghostscript.
We've found that an individual page of the PDF cannot be linked to; for example, through the usage of
http://xxxx/yyy.pdf#page=24
There must be something within the PDF that makes this not possible. Are there any specific Ghostscript options that should be passed when creating the PDF that would allow this type of page-destination link to work?
There are no specific pdfwrite (the Ghostscript device which actually produces PDF) options to do this. Without knowing why the (presumably) web browser or plugin won't open the file at the specified page, it's a little difficult to offer any more guidance.
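For reference, a typical pdfwrite invocation for combining PostScript pages into one PDF looks something like this (file names are placeholders; this is the standard form, not necessarily what the original poster used):

gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=combined.pdf -dEmbedAllFonts=true page1.ps page2.ps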
What are you using to view the PDF files?
Can you make a very simple file that fails? Can you make that file public?
If I can reproduce the problem, and the file is sufficiently simple, it may be possible to determine the problem. By the way, which version of Ghostscript are you using?
Does PDF and/or Adobe Reader support including an image by URL, so that you can insert dynamic images from a web server into a document?
The answer to your question is both yes and no. If you look in the PDF spec (I'm going by version 1.7) in section 7.11.5, you'll see that a stream within a PDF document can be represented by a URL. So yes, you can go ahead and specify that a PDF has, say, its image content at the specified URL.
The problem will be that when you specify an image within PDF, you are specifying a PARTICULAR image that must have a particular data length and encoding. Simply specifying dimensions, dct compression (aka jpg), and URL is not enough. Images are contained in streams of a particular length. If the stream is too long or too short, it is considered an error.
So you can have images dynamically served up, provided that they are always exactly the same byte length. I think. And I say this because the specification is somewhat ambiguous as to what happens when you set the length to 0 in the stream dictionary.
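To illustrate, an image XObject referring to an external URL might look roughly like the following. This is a sketch based on my reading of the spec, not a tested construct, and the /Length 0 entry is exactly the ambiguous part mentioned above:

12 0 obj
<< /Type /XObject
   /Subtype /Image
   /Width 600
   /Height 400
   /ColorSpace /DeviceRGB
   /BitsPerComponent 8
   /F << /FS /URL /F (http://example.com/image.jpg) >>
   /FFilter /DCTDecode
   /Length 0
>>
stream
endstream
endobj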
Now, is doing this practical? Maybe - you'll need a fairly strong PDF toolkit in order to be able to author these documents. And if you have that, I think you'd be better off authoring the entire PDF document that your clients want on the fly rather than trying to substitute an image at read time.
I don't believe you can place a dynamic image in a PDF document in this manner. It's possible to dynamically create an entire PDF document using web-hosted content (using PHP, Coldfusion, etc.) but changing that content later on the web server will not dynamically update previously generated PDF documents, which is what it sounds like you want to do.
As PDFs are meant to be portable by nature (PORTABLE Document Format) and thus, not always viewed online, this goes against the very principle of the document format, and is not supported as far as I know.
You could include a reference to an image at the time of generation of the PDF, but said image will be embedded into the PDF, not linked.
You could use pdf.js and modify the rendering methods slightly so that you inject your image. You can find pdf.js here: https://github.com/mozilla/pdf.js
You can also use FlexPaper, which has an API that allows you to overlay your document with images:
http://flexpaper.devaldi.com/
My objective is to extract the text and images from a PDF file while parsing its structure. The scope for parsing the structure is not exhaustive; I only need to be able to identify headings and paragraphs.
I have tried a few different things, but I did not get very far with any of them:
Convert PDF to text. It does not work for me as I lose images and the structure of the document.
Convert PDF to HTML. I found a few tools that helped me with this, and the best one so far is pdftohtml. The tool is really good presentation-wise, but I haven't been able to successfully parse the HTML.
Convert PDF to XML. Same as above.
Does anyone have any suggestions on how to tackle this problem?
There is essentially no easy cut-and-paste solution, because PDF isn't really interested in structure. There are many other answers on this site that will tell you things in much more detail, but this one should give you the main points:
If identifying text structure in PDF documents is so difficult, how do PDF readers do it so well?
If you want to do this in PDF itself (where you would have the majority of control over the process), you'll have to loop over all text on pages and identify headers by looking at their text properties (fonts used, size relative to the other text on the page, etc...).
On top of that you'll also have to identify paragraphs by looking at the positioning of text fragments, white space on the page, closeness of certain letters, words and lines... PDF by itself doesn't even have a concept for a "word", let alone "lines" or "paragraphs".
To complicate things even more, the way text is drawn on the page (and thus the order in which it appears in the PDF file itself) doesn't even have to be the proper reading order (or what we humans would consider the proper reading order).
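To make the font-size idea concrete, here is a minimal Java sketch using Apache PDFBox 2.x; the 14 pt threshold and the file name are arbitrary assumptions, and real heading detection would compare sizes against the surrounding body text:

import java.io.File;
import java.io.IOException;
import java.util.List;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;
import org.apache.pdfbox.text.TextPosition;

public class HeadingSniffer extends PDFTextStripper {
    public HeadingSniffer() throws IOException {
        super();
    }

    @Override
    protected void writeString(String text, List<TextPosition> positions) throws IOException {
        // Flag text noticeably larger than typical body text as a candidate heading
        if (!positions.isEmpty() && positions.get(0).getFontSizeInPt() > 14) {
            System.out.println("Possible heading: " + text);
        }
        super.writeString(text, positions);
    }

    public static void main(String[] args) throws IOException {
        try (PDDocument doc = PDDocument.load(new File("input.pdf"))) {
            new HeadingSniffer().getText(doc); // triggers the writeString callbacks
        }
    }
}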
Parsing PDFs for headers and their sub-contents is really very difficult (which doesn't mean it's impossible), as PDFs come in various formats. But I recently encountered a tool named GROBID which can help in this scenario. I know it's not perfect, but if we provide proper training it can accomplish our goals.
GROBID is available as open source on GitHub.
https://github.com/kermitt2/grobid
You may use the following approach with iTextSharp or other open-source libraries (a minimal sketch follows these steps):
Read the PDF file with iTextSharp or similar open-source tools and collect all text objects into an array (or convert the PDF to HTML using a tool like pdftohtml and then parse the HTML)
Sort all text objects by coordinates so you will have them all together
Then iterate through the objects and check the distance between them to see whether two or more objects can be merged into one paragraph
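Here is a minimal sketch of the first two steps using iText 5 for Java (iTextSharp has equivalent classes), extracting text already sorted by position:

import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.LocationTextExtractionStrategy;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;

public class SortedTextDump {
    public static void main(String[] args) throws Exception {
        PdfReader reader = new PdfReader("input.pdf");
        for (int page = 1; page <= reader.getNumberOfPages(); page++) {
            // LocationTextExtractionStrategy orders text chunks by their position on the page
            String text = PdfTextExtractor.getTextFromPage(reader, page,
                    new LocationTextExtractionStrategy());
            System.out.println(text);
        }
        reader.close();
    }
}

Merging into paragraphs (the last step) would require working with the individual text chunks and their coordinates rather than the flattened string.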
Or you may use a commercial tool like ByteScout PDF Extractor SDK that is capable of doing exactly this:
extract text and images along with analyzing the layout of the text
output XML or CSV where text objects are merged or split into paragraphs inside a virtual layout grid
access objects via a special API that makes it possible to address each object via its "virtual" row and column index, regardless of how it is stored inside the original PDF
Disclaimer: I am affiliated with ByteScout
PDF files can be parsed with tabula-py, or tabula-java.
I made a full tutorial on how to use tabula-py in this article. You can use Tabula in a web browser too, as long as you have Java installed.
Unless it is Marked Content, PDF does not have a structure... You have to 'guess' it, which is what the various tools are doing. There is a good blog post explaining the issues at http://blog.idrsolutions.com/2010/09/the-easy-way-to-discover-if-a-pdf-file-contains-structured-content/
As mentioned in the answers above, PDFs aren't very easy to parse. However, if you have certain additional information regarding the text that you want to parse, you can pull it off.
If your headings are positioned at specific parts of the page, you can parse the PDF file and sort the parsed output by coordinates.
If you have prior knowledge of the spacing between headings and paragraphs, you could also leverage this information to parse the file.
PDFBox is a PDF parsing tool that you can use for extracting text and images on top of which you can define your custom rules for parsing.
However, for parsing PDFs you need to have some prior knowledge of the general format of the PDF file. You can check out the following blog post, Document parsing, for more information regarding document parsing.
Disclaimer: I was involved in writing the blog post.
iText API:
// double the backslash so Java doesn't read \t as a tab character
PdfReader pr = new PdfReader("C:\\test.pdf");
References:
PdfReader