I'm trying (in a script on a Linux server) to shrink and rasterize several thousand PDF files that come from various CAD/CAM programs and represent "big" drawings (as in, 800x600mm or the like) with lots of thin lines (as in, similar to a 0.2mm pen).
The rasterized files should have visible lines when printed on A5 or similar paper, so I have to "shrink" the original drawing while preserving line thickness. As an example, when I open one of those PDF files in Preview on Mac OS X, it does exactly that: as I zoom in and out it adjusts the line thickness so the lines always look the same on screen.
I tried doing that with ImageMagick, experimenting with -density, -resize and various other settings, without much success: the thin lines just get scaled down like everything else and end up too thin (or disappear completely, in some cases) to be discernible when printed at a small size. I've also read through its documentation without success. Of course I'm also open to using other tools, as long as I can script them.
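For reference, this is the kind of command I have been trying (a sketch only; the file names and numbers are placeholders, and the real script loops over thousands of files with varying values):

convert -density 300 drawing.pdf -resize 20% drawing.png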
How could I "preserve line thickness" when rasterizing a vector PDF file in a script, just like Apple's Preview does when viewing the same file on screen?
Related
I receive postage labels from a supplier as single-page PDF documents. The labels would fit on an A5 sheet, but they are presented in portrait orientation within a portrait A4 page. I would like to be able to print two of these labels per A4 page to cut down on waste.
This could be achieved by rotating the page content without rotating the page itself, or by resizing the page (swapping its height and width) while leaving the content in place. I am aware that both of these things can result in content being lost, which isn't a problem for my use case. Ideally I'd like a command line application that works on both Linux and Windows machines. Unfortunately, web searches for "rotate" or "resize" PDF point to the many applications that rotate or resize PDF pages along with the content, which isn't what I want.
Similar questions:
With PdfBox: identical use case, see my comments on PdfBox below.
With iText: almost identical use case, I explicitly don't want any resizing of the content. See my comments on iText below as well.
Things I have investigated and tried:
pdftk - too basic
ImageMagick - the original image contains transparency and the extent argument results in a visible loss of quality
pdfjam - also requires an install of LaTeX and pdfpages; ideally I'd like something that works on both Windows and Linux.
iText7 - the documentation isn't great. It looks like the library was completely rewritten in the last few years, and the NuGet feed makes it clear that the previous version, iTextSharp, is EOL. Consequently most of the examples one finds online (including on this site) are out of date. iText7 doesn't let you resize a page. I got as far as saving a document with a new page of the right size, but struggled to copy the content over. I think I could get what I want from this, but it would take a long time and I'm trying to do something simple.
PdfBox - I've already tried one .NET library without success. Looking at the comments on the question I've linked above, this one seems to have a version issue as well. I'm trying to do something really simple here; I will try this one if I exhaust all other avenues.
Gimp - does what I want, but I have to fire up the application and point and click quite a few times to rescale the image canvas, set the background and export.
Screenshot the label from a PDF reader at 100% size and paste it into a Word/LibreOffice document. Sadly this is the most reliable method I have at the moment.
I have example labels, but they contain the names and addresses of people I've sent things to, so I'd rather not upload them.
Try the command line tool cpdf from here: https://community.coherentpdf.com
cpdf -rotate-contents <angle> in.pdf -o out.pdf
to rotate the contents without rotating the page. Or...
cpdf -mediabox "100 100 600 500" in.pdf -o out.pdf
(and -cropbox and so on) to change page dimensions without altering content. Chapter 3 of the manual is of relevance.
You can also prepare the file by removing any page rotation whilst counter-rotating the content to leave the visual appearance unchanged:
cpdf -upright in.pdf -o out.pdf
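If you have a whole folder of these labels, the commands can be combined in a small shell loop (a sketch only; the mediabox numbers are just the example values from above, and the directory names are placeholders):

mkdir -p out
for f in labels/*.pdf; do
  cpdf -upright "$f" -o tmp.pdf
  cpdf -mediabox "100 100 600 500" tmp.pdf -o "out/$(basename "$f")"
done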
Morning, everyone,
Quick question about ps2pdf. I use it to convert graphics that I produce directly in PostScript to PDF. While there is no visual problem with the PS files, I see a grid in my PDF viewer. At first I thought the problem was in the viewer, but it remains present when I compile my TeX files containing the figures with pdfLaTeX. Do you have any ideas for settings that could "fix" this display? Thanks in advance :)
Evince is independent of Ghostscript as far as PDF files are concerned, but I don't know how it can be viewing PostScript files.
I believe what you are seeing is an artefact of the PDF rendering engine in use, and the way the PDF file is constructed (which is itself dependent on the way the PostScript is constructed).
Much of the content is drawn by creating little rectangles which are intended to butt up against each other (and basically do). However, depending on the resolution, the precise numerical accuracy of the calculations and the accuracy of the co-ordinates, it can happen that these rectangles do not quite touch, leaving a theoretical gap between them.
You can see this occur with Adobe Acrobat: zooming in and out changes where the lines appear (it changes the effective resolution, thereby changing the calculations from user space to device space, i.e. to the actual pixels on screen).
I cannot say for sure that the same problem exists with Evince, but I expect it does. With Acrobat I can turn off anti-aliasing, which is where the problem really arises. Acrobat is attempting to insert an anti-aliased pixel between the two rectangles, which leads to these faint lines. Turning it off (in Acrobat X: Edit -> Preferences -> Page Display -> Smooth line art) makes the lines disappear.
Ghostscript doesn't apply anti-aliasing by default, so these lines don't appear when rendering either the PostScript or the PDF files, but if I turn on anti-aliasing (-dGraphicsAlphaBits=4) then Ghostscript renders the lines in both the PostScript and the PDF file.
Essentially I think the problem is that your PDF viewer is using anti-aliasing and your PostScript viewer isn't, so they don't look the same.
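You can reproduce the difference from the command line (a sketch; png16m and 150 dpi are arbitrary choices, and figure.pdf is a placeholder name):

gs -o plain.png -sDEVICE=png16m -r150 figure.pdf
gs -o antialiased.png -sDEVICE=png16m -r150 -dGraphicsAlphaBits=4 figure.pdf

The first rendering should be free of the grid; the second should show the faint lines.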
// EDIT 26.03.2018 - Anyone who wants to continue my work can have a look at my source files: https://github.com/n0l0cale/ocr-sampledata
I'm looking for some details about PDF files. It is most important to me that the files remain usable for a very long time and, if possible, that OCR is applied automatically to new files (which does not seem really possible with Adobe Acrobat...).
For that I've been looking at different ways to OCR my PDF files. I found three candidates which seem to do what they should (more or less), but all three have their pros and cons, and they seem to take different approaches to storing data in the PDF file. Let me explain:
a file OCRed with Adobe Acrobat:
https://github.com/n0l0cale/ocr-sampledata/blob/master/A4%20sample_ACROBAT.pdf
results in a file that Acrobat is able to open in one step (no separate loading of any background layer), and after running a preflight script I'm able to see the text that is stored hidden:
a file OCRed with Abby Finereader:
https://github.com/n0l0cale/ocr-sampledata/blob/master/A4%20sample_ABBY.pdf
does not seem suited to the default Adobe preflight script, as it does not display any additional layers:
But as far as I was able to reproduce, these files seem to have a background text layer which contains the OCRed text and lies underneath the image that is shown to the user. Unfortunately this layer seems to be loaded separately, which is confusing when opening the file with Adobe Acrobat...
a file OCRed with Tesseract 4 (Alpha):
https://github.com/n0l0cale/ocr-sampledata/blob/master/A4%20sample_TESSERACT_oem2.pdf
is also doing some weird magic with the hidden text part:
But in all three cases I'm able to search for words in the files and see the text using "Remove hidden information" and selecting "hidden text":
I'm seriously confused... Does anyone know how these programs really store their hidden text information?
S.
P.S.: For those wondering what this ominous preflight script is: https://theblog.adobe.com/hidden-gems-in-acrobat-dc-how-to-optimize-hidden-ocr-text/
Does anyone know how these programs really store their hidden text information?
You have correctly found out that the approach of Abby Finereader differs from that of Adobe Acrobat and Tesseract:
Abby creates a page content stream in which the text is first drawn normally on the page and then covered by the scanned image.
Acrobat and Tesseract create content streams in which the image is drawn first and the text is then drawn invisibly on top (using text rendering mode 3, which draws nothing). The two orderings are sketched below.
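Schematically (this is not copied from your sample files; the resource names /Im0 and /F0, the coordinates and the text are placeholders), the two page content streams look like this:

% Abby Finereader style: visible text first, scanned image drawn over it
BT /F0 10 Tf 100 700 Td (recognised text) Tj ET
q 595 0 0 842 0 0 cm /Im0 Do Q

% Acrobat / Tesseract style: image first, then invisible text (3 Tr) on top
q 595 0 0 842 0 0 cm /Im0 Do Q
BT 3 Tr /F0 10 Tf 100 700 Td (recognised text) Tj ET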
The difference between the latter two results is the choice of font used:
Acrobat uses the regular standard 14 fonts, for which a PDF viewer has a font program at hand to render them as normal glyphs.
Tesseract uses a font called GlyphLessFont, for which it embeds a font program into the result file. When rendered, the glyphs of this font do not show as normal Latin glyphs but merely as empty space.
Considering the visual effect you observed for the Abby result, the approach used by Acrobat or Tesseract might be preferable.
Whether one prefers fonts with visually recognizable glyphs (as used by Acrobat) or without (as used by Tesseract) is mostly a matter of taste; they are used only in the invisible rendering mode anyway.
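If you want to verify this in your own files, decompress the page content streams and look at the order of the operators; qpdf, for example, can do that (assuming qpdf is available; any tool that writes uncompressed content streams will do):

qpdf --qdf --object-streams=disable "A4 sample_ACROBAT.pdf" acrobat-uncompressed.pdf

Opening acrobat-uncompressed.pdf in a text editor then shows the BT ... ET text blocks, the 3 Tr rendering mode and the Do operator that draws the scanned image.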
I have several PDF files that have been OCR-processed (not by me). They contain both the scanned image and the OCR text. They seem to work fine in some viewers (iPhone/iPad), but not in others (Preview.app on macOS) which makes them somewhat awkward to read.
From googling around, it seems that the text & image may be layered incorrectly or there is a problem with the fonts used? I'm not even sure I'm using the correct vocabulary, as most hits I get are worthless.
Is it possible to use ghostscript or something to batch-fix these files?
Example of "bad" rendering:
It's impossible to say what's wrong with the PDF file (or the viewer) without seeing the PDF file, which also makes it hard to propose solutions!
You could certainly run the file through Ghostscript to the pdfwrite device, and use the -dFILTERTEXT switch to not process the text. The resulting document would therefore not contain the offending text, but would still contain the image.
Of course, the result would then no longer be searchable, and the text could not be highlighted.
You could instead use -dFILTERIMAGE which would remove the original image leaving the text behind. But then anything in the original document which was not text would now be missing.
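The corresponding command lines would look roughly like this (a sketch; input.pdf is a placeholder, and the switches require a reasonably recent Ghostscript):

gs -o no-text.pdf -sDEVICE=pdfwrite -dFILTERTEXT input.pdf
gs -o no-image.pdf -sDEVICE=pdfwrite -dFILTERIMAGE input.pdf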
The usual 'best practice' is to have the text drawn in rendering mode 3, which makes no marks. This allows you to see the original image without the OCR'ed text interfering. It's possible that the viewer you are using is not honouring the text rendering mode, which would be a (fairly serious) bug in the viewer. The most recent versions of macOS seem to have some nasty bugs in the Quartz PDF rendering engine.
The other way to do this is to draw the text first, then put the original image on top of it, but that's hard to get wrong; I suspect it's more likely the text rendering mode.
EDIT
The PDF file first draws the text, then draws the image on top of the text. The underlying text should not appear. mkl is quite correct in his comment.
The correct way to fix this is to fix the consumer which is rendering it incorrectly. As I mentioned above, the latest version of Quartz seems to have some fairly serious bugs; you might choose to raise this as a bug with Apple.
The only other solution would be to run this through something which will remove the text. Ghostscript can do this but there are implications; firstly it will no longer be possible to search/copy/paste text from the document. Secondly you would need to run quite a complex command line in order to prevent the decompressed JPX images being recompressed as JPEG, which would probably result in compromised quality. Finally the resulting file size would be larger.
How can I add a multi-page background (e.g. different backgrounds for odd and even pages) to a 10,000-page PDF, while keeping the output file as small as possible?
I'm producing massively multi-page documents (e.g. 10,000 pages in one document). Each page has a background, which I apply like this:
I have a lot of .dvi documents, which I join using dviconcat
next I run dvipdf on the joined .dvi
and then I use pdftk to apply the background: pdftk infile.pdf background bg.pdf output outfile.pdf
This way I get a fairly small file, e.g. 200 MB, compared to the situation where I produce a lot of .pdf files that already contain the background and join them using pdftk, which results in a file of e.g. 2 GB.
I think this is because the background is not repeated on every page; a single copy is stored in the PDF and the pages merely reference it.
Unfortunately, I now need a two-page (two-sided) background: one background for odd pages and a different one for even pages. pdftk doesn't know how to do that. I could prepare a 10,000-page background file, but it would be huge (e.g. 1 GB).
Any suggestions for how I could accomplish this without juggling multi-gigabyte files? Is it doable at all? If so, with pdftk or some other tool?
One solution would be to add the background when you convert the PostScript to PDF. Using a BeginPage procedure you can paint the background before you paint the page contents. By checking the page count in BeginPage you can choose which background to paint, so you can have different ones for even/odd/whatever pages.
If you specify each background as a PostScript form, then your BeginPage can be small; also (and rather more importantly) the current version of Ghostscript, 9.14, will attempt to pass PostScript forms into a PDF file as a PDF form, and it can identify and consolidate duplicates, so it 'should' only embed each form once. This should result in the minimum possible file size.
However, this code is at an early stage of development and might not work for you, also you'll need to do some PostScript programming.
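To give an idea of the shape of this, a minimal sketch of the non-form variant (untested; the backgrounds here are just plain coloured A4 rectangles standing in for your real artwork, and without forms every page gets its own copy of whatever the background draws):

cat > backgrounds.ps <<'EOF'
%!PS
<< /BeginPage {        % called with the zero-based page count on the stack
     dup 2 mod 0 eq
       { gsave 0.92 0.95 1.00 setrgbcolor 0 0 595 842 rectfill grestore }  % count 0,2,4... = 1st,3rd,5th page
       { gsave 1.00 0.95 0.92 setrgbcolor 0 0 595 842 rectfill grestore }  % count 1,3,5... = 2nd,4th,6th page
     ifelse
     pop               % consume the page count
   }
>> setpagedevice
EOF
gs -o out.pdf -sDEVICE=pdfwrite backgrounds.ps input.ps

Here input.ps stands for the PostScript produced from your joined .dvi (e.g. with dvips); running Ghostscript with -sDEVICE=pdfwrite then replaces the dvipdf step.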
I'm not familiar with pdftk, but would it be possible to produce all the even pages and add a background to them, produce all the odd pages and add a different background, then use pdftk to merge and interleave the pages? Perhaps something like the sketch below.
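If I read the pdftk documentation correctly, that could look roughly like this (untested; the odd/even range qualifiers and the shuffle operation do the work, and all file names are placeholders):

pdftk A=infile.pdf cat A1-endodd output odd.pdf
pdftk A=infile.pdf cat A1-endeven output even.pdf
pdftk odd.pdf background bg_odd.pdf output odd_bg.pdf
pdftk even.pdf background bg_even.pdf output even_bg.pdf
pdftk O=odd_bg.pdf E=even_bg.pdf shuffle O E output outfile.pdf

Whether the shared-background optimisation survives the final shuffle is something you would need to check on a real file.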
NB Ghostscript doesn't handle .dvi files, so I'm rather at a loss to know how you use Ghostscript to 'join' them. Also, if you are somehow creating the PostScript files using Ghostscript, you would almost certainly be better off using Ghostscript to produce the PDF file directly. (I'm assuming here that you are using Ghostscript's ps2pdf, but even if you aren't it'll still be quicker to produce the PDF in one step, and almost certainly produce better output too)