I'm working on a calculation program which creates graphs from input data with ZedGraph. My client would like to embed those graphics into Microsoft Word and then publish the document as PDF. Both PNGs and enhanced metafiles produce badly rasterized results in the PDF.
I've tested this with Office 2007 and the "built-in" PDF publisher.
Can you recommend a workflow that doesn't break the vectorized data on the way to PDF?
Update
Thanks for all the answers. It turned out that .NET doesn't actually create metafiles when writing to disk. See the respective question. Once I started using P/Invoke to create real metafiles on disk (instead of the automatic PNG fallback), the quality of the generated PDFs and prints improved vastly.
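For reference, a minimal sketch of that P/Invoke route, assuming a GDI+ drawing callback (in practice that is where the ZedGraph pane would be asked to draw itself); the gdi32 calls are real, but the helper name and overall structure are just illustrative:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

class EmfWriter
{
    // Image.Save() re-encodes a Metafile as PNG; CopyEnhMetaFile writes the
    // native EMF records to disk instead.
    [DllImport("gdi32.dll")]
    static extern IntPtr CopyEnhMetaFile(IntPtr hemfSrc, string fileName);

    [DllImport("gdi32.dll")]
    static extern bool DeleteEnhMetaFile(IntPtr hemf);

    public static void Save(string path, int width, int height, Action<Graphics> draw)
    {
        using (var refBitmap = new Bitmap(width, height))
        using (var refGraphics = Graphics.FromImage(refBitmap))
        {
            IntPtr hdc = refGraphics.GetHdc();
            try
            {
                // Dual EMF/EMF+ records tend to survive the trip through Word best.
                using (var metafile = new Metafile(hdc, new Rectangle(0, 0, width, height),
                                                   MetafileFrameUnit.Pixel, EmfType.EmfPlusDual))
                {
                    using (var g = Graphics.FromImage(metafile))
                        draw(g);                               // e.g. ask the ZedGraph pane to draw here

                    IntPtr hemf = metafile.GetHenhmetafile();  // caller owns this handle afterwards
                    DeleteEnhMetaFile(CopyEnhMetaFile(hemf, path));
                    DeleteEnhMetaFile(hemf);
                }
            }
            finally
            {
                refGraphics.ReleaseHdc(hdc);
            }
        }
    }
}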
What about embedding Excel graphs?
The shareware graphics filter importps allows you to insert PDF and PS files into Word documents.
A demo version that scrambles colors is available at:
http://www.helga-glunz.homepage.t-online.de/importps
If all else fails, you might try creating your PNG files at crazy-high resolution. This might make the rasterization fine-grained enough to not be noticeable.
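A rough sketch of that idea with plain GDI+, not specific to ZedGraph (the draw delegate stands in for whatever renders the chart); the helper name and the 96 dpi baseline are assumptions:

using System;
using System.Drawing;
using System.Drawing.Imaging;

static class HighResPng
{
    // Render at several times the target pixel size and stamp a high DPI
    // value, so Word should scale the image back to the intended physical size.
    public static void Save(string path, int widthPx, int heightPx, float dpi, Action<Graphics> draw)
    {
        int scale = (int)Math.Ceiling(dpi / 96f);   // assume the chart was laid out at 96 dpi
        using (var bmp = new Bitmap(widthPx * scale, heightPx * scale))
        {
            bmp.SetResolution(dpi, dpi);
            using (var g = Graphics.FromImage(bmp))
            {
                g.ScaleTransform(scale, scale);     // drawing code keeps its original coordinates
                draw(g);
            }
            bmp.Save(path, ImageFormat.Png);
        }
    }
}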
I don't know anything about ZedGraph, but if you can export to (or somehow get to) an EPS file, that should work.
I often need to get vector artwork out of a PDF for use in Word, and to do this I usually go via Adobe Illustrator to save as an EPS. [Illustrator just happens to be something that I have available - I'm not saying there's anything magical about it; you may be able to create EPS files via another route.]
I see you're using Office 2007 and I can't say I have much experience with that, but the situation with Word 2003 is that you can insert an EPS that was exported from Adobe Illustrator if you choose "Illustrator 8.0" format when exporting from Illustrator. Newer versions of Illustrator seem to create files that Word 2003 can't handle. (Word 2007 may be better in this respect).
Related
I wasn't able to find anything on the internet, and I get the feeling that what I want is not such a trivial thing. To make a long story short: I'd like to get my hands on the underlying code that describes a selected area of a .pdf file. I've been looking for libraries or open source readers but couldn't find anything useful yet.
Does anything exist that can accomplish this, or anything that might be reused (like an open source reader) to get there a little faster without having to write everything from scratch?
You can convert a whole PDF document to PostScript using pdftops, one of the utilities from the poppler PDF rendering library.
This utility enables you to convert individual pages, which is at least a start.
If you just want to extract bitmapped images, try pdfimages from the same package. This extraction can also be restricted to individual pages.
The poppler library was originally written for UNIX-like systems, but there are a couple of Windows builds available.
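A small sketch of driving those two tools from .NET; the -f/-l options select the first and last page to process, while the file names and the helper itself are just placeholders:

using System;
using System.Diagnostics;

static class Poppler
{
    // Runs a poppler command-line tool and surfaces any error output.
    public static void Run(string tool, string arguments)
    {
        var psi = new ProcessStartInfo(tool, arguments)
        {
            UseShellExecute = false,
            RedirectStandardError = true
        };
        using (var p = Process.Start(psi))
        {
            string errors = p.StandardError.ReadToEnd();
            p.WaitForExit();
            if (p.ExitCode != 0)
                throw new InvalidOperationException(tool + " failed: " + errors);
        }
    }
}

// Page 3 only, first as PostScript, then as extracted images:
// Poppler.Run("pdftops",   "-f 3 -l 3 input.pdf page3.ps");
// Poppler.Run("pdfimages", "-f 3 -l 3 input.pdf page3-img");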
The open source tool from iText called iText RUPS does what you want, showing you all the PDF commands for a particular PDF and allowing you to visualize the structure and relationships.
http://sourceforge.net/projects/itextrups/
Given a PDF file, can I find out which software/libraries (e.g. PDFBox, Adobe Acrobat, iText...) were used to create or edit it?
The Adobe specification defines the Producer field (see 'Mac OS X 10.5.6 Quartz PDFContext' in the screenshot in nimeshjm's answer) as the name of the application that "converted it from another format to PDF". When a PDF is generated programmatically, nothing is really converted, so you will normally find the name of the generating SDK here.
The Creator field is related and is defined as the name of the application that created the document from which the PDF was converted. This is typically MS Word or similar.
Note that this is all by convention. In practice, you cannot really rely on this and you may encounter for example empty Producer fields.
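If you want to read those fields in code, one option (assuming iTextSharp 5.x is available) is the document information dictionary exposed by PdfReader. Just a sketch; as noted above, either entry can be absent:

using System;
using iTextSharp.text.pdf;

static class DocInfo
{
    public static void Print(string path)
    {
        PdfReader reader = new PdfReader(path);
        try
        {
            string producer, creator;
            reader.Info.TryGetValue("Producer", out producer);  // e.g. the generating SDK
            reader.Info.TryGetValue("Creator", out creator);    // e.g. "Microsoft Word"
            Console.WriteLine("Producer: " + (producer ?? "<not set>"));
            Console.WriteLine("Creator:  " + (creator ?? "<not set>"));
        }
        finally
        {
            reader.Close();
        }
    }
}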
You can try opening the file in Adobe Acrobat Reader and looking at the properties.
You can find this under File -> Properties in Adobe Acrobat Reader after you open the PDF file.
You can probably get away without any PDF libraries for this type of operation. It won't be 100% reliable but I think you can probably assume 99% reliability.
So... write some code to open your PDF as a text stream and search down for /Producer. You will find something like this:
69 0 obj
<<
/Creator (PDF+Forms 2.0)
/CreationDate (D:20010627111809)
/Title (Demo)
/Producer (Cardiff Software - TELEform 7.0)
/ModDate (D:20010627111810-05'00')
>>
Grab the bits between the parentheses and Bob's your uncle. Technically the text can be stored in other formats too, but I think those will be pretty uncommon for this particular type of entry.
If you can't find anything here then look for the XMP data, which is guaranteed to be in clear text. It will look something like this:
39 0 obj
<</Subtype/XML/Length 15172/Type/Metadata>>stream
<?xpacket begin='' id='W5M0MpCehiHzreSzNTczkc9d'?>
<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="Adobe XMP Core 4.0-c320 44.293068, Sun Jul 08 2007 18:10:11">
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<rdf:Description rdf:about=""
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:xap="http://ns.adobe.com/xap/1.0/"
xmlns:xapGImg="http://ns.adobe.com/xap/1.0/g/img/"
xmlns:xapMM="http://ns.adobe.com/xap/1.0/mm/"
xmlns:pdf="http://ns.adobe.com/pdf/1.3/"
dc:format="application/pdf"
xap:CreatorTool="Adobe Illustrator CS2"
xap:CreateDate="2006-05-04T15:53:27-07:00"
xap:ModifyDate="2006-05-04T15:53:27-07:00"
xap:MetadataDate="2006-05-04T15:53:27-07:00"
xapMM:DocumentID="uuid:61AC83CBC0DBDA11A32BC847EF128E34"
xapMM:InstanceID="uuid:cba15bf3-d7da-4a4e-a563-fc20d13e258a"
pdf:Producer="Adobe PDF library 7.77">
<dc:title>
<rdf:Alt>
<rdf:li xml:lang="x-default">3.01 PDF components</rdf:li>
</rdf:Alt>
</dc:title>
...
The combination of these two is going to be practically always right. If you want 100% reliability then by all means use a PDF library, but for many purposes this should be sufficient.
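A library-free sketch of that approach: read the file as Latin-1 text, grab a literal-string /Producer if there is one, and otherwise fall back to the pdf:Producer or xap:CreatorTool attributes in the XMP packet. It deliberately ignores hex strings, escaped parentheses and compressed metadata, on the assumption that these are uncommon for this entry:

using System.IO;
using System.Text;
using System.Text.RegularExpressions;

static class ProducerSniffer
{
    public static string GuessProducer(string path)
    {
        // Latin-1 keeps the bytes intact enough for a plain-text scan.
        string text = File.ReadAllText(path, Encoding.GetEncoding("ISO-8859-1"));

        var m = Regex.Match(text, @"/Producer\s*\(([^)]*)\)");
        if (m.Success)
            return m.Groups[1].Value;

        m = Regex.Match(text, "pdf:Producer=\"([^\"]*)\"");
        if (m.Success)
            return m.Groups[1].Value;

        m = Regex.Match(text, "xap:CreatorTool=\"([^\"]*)\"");
        return m.Success ? m.Groups[1].Value : null;
    }
}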
My replies may feature concepts based around ABCpdf. It's what I work on. It's what I know. :-)
It is usually difficult to determine which software actually designed a PDF because most Microsoft Office products can convert an edited file to PDF. By this I mean: when you open a regular typed document, you have the option to save it as PDF. If you are familiar with PowerPoint slides, it can be easy to tell based on the design once the file is in PDF.
On the other hand, Adobe Acrobat has the ability to create files like those application forms we often download (from an embassy site, immigration site, etc.).
Other software such as Adobe Photoshop, Illustrator, etc. can also save files as PDF. Hope this helps.
We are developing a printing server that allows user to upload a DOC and print it out via HP ePrint. It needs to support page extraction.
I tried to use a macro (with the help of Adobe Acrobat Reader Pro and MS Word) to extract pages into a PDF. But it turns out that the resulting PDF may be larger in size than expected.
Is there any way to extract pages (without losing formatting, e.g. tables in the DOC) from DOC to DOC, so that the size stays approximately what you would expect?
This is a difficult requirement. It sounds like you have run into 2 problems (large PDFs and format loss) at the outset. You should probably say more about what you mean by "extraction" and why PDF is part of your solution because that's quite different from "upload and print" and "doc to doc". That way readers will have more suggestions for you.
I would suggest you try to approach the problem from a different angle if possible, because I suspect that you are unlikely to achieve a good, efficient, stable result. One possible approach is to turn the DOC into PDF and then use iText or some other PDF library to manipulate the PDF before printing. It really depends on what you are trying to achieve - the specifics of your extract/merge/convert.
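If you go the PDF route, the page-cutting step itself is small. A sketch with iTextSharp 5.x (the DOC-to-PDF conversion is assumed to have happened already, e.g. via Word automation; the method name and range string are just examples):

using System.IO;
using iTextSharp.text.pdf;

static class PageExtractor
{
    // Keeps only the requested pages and writes them to a new PDF.
    public static void Extract(string srcPdf, string destPdf, string range)
    {
        PdfReader reader = new PdfReader(srcPdf);
        try
        {
            reader.SelectPages(range);                  // e.g. "1-3" or "2,5,7-9"
            FileStream output = new FileStream(destPdf, FileMode.Create);
            PdfStamper stamper = new PdfStamper(reader, output);
            stamper.Close();                            // writes the reduced document and closes the stream
        }
        finally
        {
            reader.Close();
        }
    }
}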
I have to extract text from invoice and bill PDF files.
The file layouts can get complex, though they are mostly filled with tables.
I've already read a few dozen articles about the PDF format, how easy it is for our brain to grasp it and how hard it is for a machine to understand its structure.
I've also downloaded a few tools like Python's pdfminer and some Java tools; some even have rule-based layout extraction, like LA-PDFText. These are all great libraries, but they leave the final step to you.
Adobe also has an online service called exportPdf, but it can't be customized.
Bottom line, I understand that in order to extract text from structured PDF files and convert it to XML, for example, there has to be some level of manual work.
I also found A-PDF Form Data Extractor, a non-free tool with the ability to set extraction rules that claims to do the job, though it's hard to find a proper manual and it runs only on Windows.
I thought I might even try converting those files to images and running tesseract-ocr, but decided to ask for advice here before I spend more time on it.
I'll be very grateful if someone with such experience could give me a hint.
I've done a lot of PDF extraction and I can confirm, as you've already discovered, that it can be a painful process to start. One of the important things to understand is that there is no concept of "tables" within a PDF, just text that happens to have lines around it. Also, there's no guarantee that the linear order of text within the PDF code actually matches the visual order when printed. In other words, there's no guarantee that "hello world" is written in that order; it could be draw 'word' at coord 20, then draw 'hello' at coord 10. Most PDF creators don't do this, but still there's no guarantee. The more creative a PDF creator is (InDesign, Illustrator, etc.), the harder the text is going to be to get out. And once a designer starts messing with fonts too much, some programs will actually output words one character at a time, changing the font just slightly each time.
That said, I'd recommend the first one that you looked at, LA-PDFText. You can run it in discovery mode (blockify) from which you can create rules. I don't have Java installed anymore so I can't test it but it seems very promising.
Your second one, A-PDF Form Data Extractor, only really works with actual PDF forms. If this is your case I'd recommend just using an open source solution like iText/iTextSharp.
The last OCR one makes me cringe. I just can't imagine going through those hoops would get you better text representation than parsing the PDF. But then again, PDF is a visual format so maybe it would.
Personally I use iText/iTextSharp for this kind of thing but I also like to do things the hard way.
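For completeness, here is roughly what basic extraction with iTextSharp 5.x looks like; LocationTextExtractionStrategy sorts the chunks by position, which helps when the content-stream order doesn't match the visual order. This is only the generic starting point, not the table-aware part:

using System.Text;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

static class TextDump
{
    public static string ExtractText(string path)
    {
        var sb = new StringBuilder();
        PdfReader reader = new PdfReader(path);
        try
        {
            for (int page = 1; page <= reader.NumberOfPages; page++)
                sb.AppendLine(PdfTextExtractor.GetTextFromPage(
                    reader, page, new LocationTextExtractionStrategy()));
        }
        finally
        {
            reader.Close();
        }
        return sb.ToString();
    }
}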
It is not clear whether you are looking for a development tool to automate the data extraction from bills and invoices, or just for a one-time tool (utility) that can be used by a non-developer.
Anyway, here are some specialized tools, including the engines they use:
Tabula (open source, especially designed to extract data from tables in PDF. Can export shell scripts for batch processing, runs as a localhost web service, powered by the JRuby Tabula engine)
Viet OCR (open source .NET desktop utility for text extraction from PDF and images, based on the tesseract OCR engine)
Bytescout PDF Viewer (freeware closed source .NET utility, detects and extracts tables, including scanned invoices, powered by PDF Extractor SDK)
DISCLAIMER: I work for ByteScout.
Like the title says. Reason I ask is that we're converting PDFs to formatted ASCII text (using pdftotext) and only want to display the ones that look reasonably sane.
PPT files tend to have text over images, diagonal text, and other things that don't translate to ASCII very well, so we'd like to filter them out if we can.
The creating application of a PDF is listed in its XMP metadata. You can see this quite easily in Acrobat 9 (and I believe earlier): go to File > Properties, click Additional Metadata..., then go to Advanced and it's listed under both XMP Core Properties and PDF Properties:
xmp:CreatorTool: Microsoft PowerPoint
pdf:Creator: Microsoft PowerPoint
I'm guessing you want to find this programmatically, so you'll need to find a library that reads this metadata and works with your language. Here is a list of some XMP tools.
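As one concrete option (my pick, not something that list prescribes), iTextSharp 5.x exposes both the info dictionary and the raw XMP packet, so a crude check could look like this; as other answers point out, converters are free to omit or overwrite these values:

using System;
using System.Text;
using iTextSharp.text.pdf;

static class PowerPointSniffer
{
    public static bool LooksLikePowerPoint(string path)
    {
        PdfReader reader = new PdfReader(path);
        try
        {
            string creator;
            reader.Info.TryGetValue("Creator", out creator);
            string xmp = reader.Metadata != null ? Encoding.UTF8.GetString(reader.Metadata) : "";

            return (creator ?? "").IndexOf("PowerPoint", StringComparison.OrdinalIgnoreCase) >= 0
                || xmp.IndexOf("PowerPoint", StringComparison.OrdinalIgnoreCase) >= 0;
        }
        finally
        {
            reader.Close();
        }
    }
}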
Short answer:
No, I don't think so.
Long answer:
No, I don't think so, because there are many ways to convert a PowerPoint file to PDF, for example Adobe Acrobat, PDFCreator and many many others. It's up to the converters to embed specific information in the PDF file; even if you find a way to detect PowerPoint-sourced PDFs from one converter, the same method may not work for another.
Even longer answer:
No, I don't think so, because of the reasons described in the "long answer". And I don't think detecting the source of the PDF is the best approach to the problem you are trying to solve. It's not just PowerPoint that produces overlapped text and images. I think it's much better to detect the actual layout of the PDF file. If there is an overlay of image and text, then do some filtering or pre-processing to cater for that.
Your reasoning is very arbitrary - there are surely plenty of PPT files without the features you describe, and plenty of PDF files with them that were generated from another source.
In theory a better method would just be to detect when these "unwanted" situations occur. However, even though the PDF format is partly open (only for reading, apparently, so it's not truly an open format), extracting complex data like that would be incredibly difficult.
All PDFs can have this problem regardless of their source. Most desktop publishing suites are capable of outputting PDF and are often sold boasting their high quality and flashier PDF presentations ...
A "saner" method would be to use a PDF parser, ITextSharp, or pdfNet...etc, Using the library of your choice, find all image rectangles, and all text rectangles, SORT THE RECTANGLES, and then see if there is substantial overlap of text and image rects -- ignoring image to image overlaps. If so, reject the page and/or document.
That won't be perfect, but at least it's going to catch many PDFs that aren't sane, regardless of source. Other heuristics to add would include color analysis. (i.e. are the colors in the overlapping region sufficiently different to allow "sane" results?)
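A rough sketch of that heuristic with iTextSharp 5.x (my pick; any parser with render callbacks would do): collect approximate bounding boxes for text chunks and placed images on a page, then flag the page if any text box intersects an image box. The naive pairwise check is where sorting the rectangles would speed things up, and rotated images would need more care than this axis-aligned approximation:

using System.Collections.Generic;
using iTextSharp.text;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

class OverlapListener : IRenderListener
{
    public readonly List<Rectangle> TextBoxes = new List<Rectangle>();
    public readonly List<Rectangle> ImageBoxes = new List<Rectangle>();

    public void BeginTextBlock() { }
    public void EndTextBlock() { }

    public void RenderText(TextRenderInfo info)
    {
        // Approximate the text chunk by its descent/ascent corners.
        Vector bottomLeft = info.GetDescentLine().GetStartPoint();
        Vector topRight = info.GetAscentLine().GetEndPoint();
        TextBoxes.Add(new Rectangle(bottomLeft[Vector.I1], bottomLeft[Vector.I2],
                                    topRight[Vector.I1], topRight[Vector.I2]));
    }

    public void RenderImage(ImageRenderInfo info)
    {
        // The CTM maps the unit square onto the placed image, so its
        // translation and scale give an axis-aligned bounding box.
        Matrix ctm = info.GetImageCTM();
        float x = ctm[Matrix.I31], y = ctm[Matrix.I32];
        ImageBoxes.Add(new Rectangle(x, y, x + ctm[Matrix.I11], y + ctm[Matrix.I22]));
    }
}

static class SanityCheck
{
    static bool Intersects(Rectangle a, Rectangle b)
    {
        return a.Left < b.Right && b.Left < a.Right &&
               a.Bottom < b.Top && b.Bottom < a.Top;
    }

    public static bool PageHasTextOverImage(string path, int page)
    {
        PdfReader reader = new PdfReader(path);
        try
        {
            var listener = new OverlapListener();
            new PdfReaderContentParser(reader).ProcessContent(page, listener);

            foreach (Rectangle img in listener.ImageBoxes)
                foreach (Rectangle txt in listener.TextBoxes)
                    if (Intersects(img, txt))
                        return true;
            return false;
        }
        finally
        {
            reader.Close();
        }
    }
}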
Best of luck to you
It might put its name in the creator or producer info, but I don't have a copy to check this theory with.
In general, it is not an easy task to programmatically determine (reliably) where a file came from or how it was generated based on its contents. After all, a file is just a collection of bits.
Unless you have a lot of resources to expend building the heuristics to determine whether a file looks "reasonably sane" according to your needs, I would consider this a task for human beings.
Some converters from PPT to PDF preserve the creator in comments at the beginning of the PDF.
I think that PDFs generated from most applications seem to be the same. They may have some metadata that you can read from the file...