LaTeX using EPS images builds slowly [closed]

I'm working with a LaTeX document with EPS images, as in the example below...
\documentclass[11pt]{paper}
\usepackage[dvips]{graphicx}
\usepackage{fullpage}
\usepackage{hyperref}
\usepackage{amsmath}
\DeclareMathSizes{8}{8}{8}{8}
\author{Matt Miller}
\title{my paper}
\begin{document}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=2in]{Figuer.eps}
\end{center}
\caption{Figure\label{fig:myFig}}
\end{figure}
\end{document}
When I go to build my LaTeX document, the time it takes to build it keeps increasing. Are there any tips or tricks to help speed up this process?
latex paper.tex; dvipdf paper.dvi

Some additional ideas:
try making a simpler figure (e.g. if it's a bitmapped figure, make a lower-resolution one or one with a low-resolution preview)
use pdflatex and have the figure be a .jpg, .png, or .pdf source.
I generally take the latter approach (pdflatex).

How big are the EPS files? LaTeX only needs to know the size of the bounding box, which is at the beginning of the file.
dvips (not dvipdf) shouldn't take too much time, since it just needs to embed the EPS into the PostScript file.
dvipdf, on the other hand, has to convert the EPS into PDF, which is expensive.
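To see where the time goes, you can split the pipeline into its stages; a rough sketch (using dvips plus ps2pdf in place of dvipdf is my substitution):
latex paper.tex    # fast: only reads the EPS bounding boxes
dvips paper.dvi    # fast: embeds the EPS files verbatim into paper.ps
ps2pdf paper.ps    # slow: converts every embedded EPS to PDF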

Indeed, you can directly use
pdflatex paper.tex
A few changes are required.
Convert your graphics from EPS to PDF before running pdflatex. You need to do it only once:
epstopdf Figuer.eps
It will produce Figuer.pdf, which is suitable for pdflatex. In your example, dvipdf does this conversion on every build.
In the document use
\usepackage[pdftex]{graphicx} % not [dvips]
And to include graphics, omit the extension:
\includegraphics[width=2in]{Figuer} % but {Figuer.pdf} works too
It will choose Figuer.pdf when compiled with pdflatex and Figuer.eps when compiled with latex, so the document remains compatible with legacy latex (just remember to change the \usepackage options for graphicx accordingly).
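Putting it together, the workflow becomes (a sketch using the file names from the example above):
epstopdf Figuer.eps   # one-time conversion; produces Figuer.pdf
pdflatex paper.tex    # each build produces paper.pdf directly, no dvipdf step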

Reducing the file size of your EPS files might help. Here are some ideas on how to do that.
If you have the original image as JPEG, PNG, TIFF, GIF (or any other sampled image), reduce the original file size with whatever tools you have, then convert to EPS using sam2p. sam2p gives you much smaller EPS files than most popular converters do.
If your EPS is vector graphics, convert it to PDF using ps2pdf14 (part of Ghostscript), then convert back to EPS using pdftops -eps (part of xpdf). This may reduce the EPS file size a lot.
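On the command line, that vector round trip looks like this (file names are placeholders):
ps2pdf14 figure.eps                  # Ghostscript: produces figure.pdf
pdftops -eps figure.pdf small.eps    # xpdf: back to an often much smaller EPS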

As a quick fix, try passing the [draft] option to the graphicx package (i.e. \usepackage[draft]{graphicx}); it replaces each image with a box of the same size, so the image data is never embedded while you are drafting.

Are you using a DVI previewer, or going straight to PDF?
If you go all the way to PDF, you'll pay the cost of decoding and re-encoding the images (I used to have that problem with Visio diagrams). However, if you can generate PostScript most of the time, or work directly with the DVI, the experience should be manageable.
Also, some packages will create .pdf files for you from your figures, which you can then embed (I do that on my Mac).

Related

How to convert scanned document images to a PDF document with high compression?

I need to convert scanned document images to a PDF document with high compression. The compression ratio is very important. Can someone recommend any solution in C# for this task?
Best regards, Alexander
There is a free program called PDFBeads that can do it. It requires Ruby, ImageMagick and optionally jbig2enc.
The PDF format itself will probably add next to no overhead in your case; your images will account for most of the output file size.
So you should compress your images with the highest possible compression. For black-and-white images you might get the smallest output using the FAX4 or JBIG2 compression schemes (both supported in PDF files).
For other images (grayscale, color), either use the smallest possible size, lowest resolution and quality, or convert the images to black-and-white and use the FAX4/JBIG2 compression scheme.
Please note that you will most probably lose some detail when converting any image to black-and-white.
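As a command-line illustration of the FAX4 route (my example, not from the answer; ImageMagick is one tool that can write CCITT Group 4 compression into a PDF):
convert scan.png -monochrome -compress Group4 scan.pdf   # bilevel conversion + FAX4 (Group 4) compression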
If you are looking for a library that can help you with recompression then have a look at Docotic.Pdf library (Disclaimer: I am one of developers of the library).
The Optimize images sample code shows how to recompress images before adding them to PDF. The sample shows how to recompress with JPEG, but for FAX4 the code will be almost the same.

Reducing the size of pdf generated from software using proprietary fonts

I am trying to bring an Indian magazine online. This magazine is typed in CorelDraw using a proprietary Devanagari font (http://www.modular-infotech.com/html/shreelipi.html). The vendor provides a USB dongle that has to be attached to the machine whenever you want to access the fonts, and this software has been in use for the past 10 years.
To put the magazine online, we've tried to convert it to PDF (by printing). The resultant PDF size is of the order of 30-50MB, even though the PDF does not contain a single image. I am guessing it converts the whole text into an image.
It would be really difficult for users to read this magazine given its size. When I convert it to .swf format (to add flipbook-style functionality), the size drops to 5-6MB, but there are people who like to download the magazine and then read it. I have had no luck reducing the size of the PDF.
I have done a lot of research on the web. PostScript and PrimoPDF do not help much. The best I could get was a 30% reduction using the DocuCom PDF printer, but it is still 20MB. I have tried playing with resolution, compression and quality, but the best I could get was 18MB.
Ideally I would like to reduce it to less than 2MB.
I would be really grateful if you could help me reduce the size of the pdf! Considering that it has no images, I am hopeful that I can get some really good compression.
The (35MB) magazine can be downloaded from: http://merajhola.in/jin-march.pdf
I can't see any easy way to reduce the size of this PDF. There are no embedded fonts and all the text is drawn using vector graphics primitives. No amount of tweaking the resolution, compression and quality will make a significant improvement.
One possible option would be to embed the font as a subset rather than drawing the text as vector graphics. That would almost certainly make a big difference; however, I doubt the proprietary font license will allow it.
I'm sorry, but this Shree-Lipi thing just sounds wrong in 2012. It would be much better to use proper OpenType fonts with modern (say InDesign) or free (say LuaTeX) software.

Compressing JPG page to PDF with various compressions/settings [closed]

I would like to take a single-page JPG and experiment with various PDF compression settings and other settings (to analyse the resultant PDF size and quality). Can anyone point me towards decent tools to do this, and any useful docs/guides?
Adobe Acrobat Distiller, Ghostscript, possibly others. Acrobat has its own manual; the Ghostscript documentation for the current version's PostScript-to-PDF conversion can be found at:
http://svn.ghostscript.com/ghostscript/tags/ghostscript-9.02/doc/Ps2pdf.htm
(print your JPEG to a PostScript file before starting).
If your original is a JPEG, then the best-quality output will come from doing nothing at all to it: simply wrap it up in PDF syntax.
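As an illustration (my example, not part of the original answer): the img2pdf tool does exactly this kind of lossless wrapping:
img2pdf page.jpg -o page.pdf   # embeds the JPEG stream as-is, no recompression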
If you insist on downsampling the image data (which is the only way you are going to reduce the size of a JPEG image), then you would be advised not to use DCT (JPEG) compression in the PDF file, as this will introduce objectionable artefacts.
You should use a lossless compression scheme instead, e.g. "Flate".
Your best bet would be to go back to an original which has not been stored as a JPEG, downsample it in a decent image application, then convert to JPEG and wrap that up in a PDF.
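Sketching that last suggestion on the command line (my example; the tool choice, sizes and quality values are placeholders):
convert original.tiff -resize 50% -quality 85 page.jpg   # downsample the non-JPEG original, JPEG-encode once
img2pdf page.jpg -o page.pdf                             # wrap the result in a PDF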
The Docotic.Pdf library can be used to add JPEGs (with or without recompression) to a PDF.
Please take a look at the sample that shows how to recompress images before adding them to PDF. With the help of the library you can recompress existing images too.
Disclaimer: I work for Bit Miracle, vendor of the library.
If you're OK working with .NET on Windows, my company, Atalasoft, has tools for creating image PDFs. You can tweak the compression very easily using code like this:
public void WriteJpegPdf(AtalaImage image, Stream outStream)
{
    PdfEncoder encoder = new PdfEncoder();
    encoder.JpegQuality = 60; // 0 - 100
    encoder.Save(outStream, image, PdfCompressionType.Jpeg);
}
This is the simplest way of setting the JPEG quality. The encoder will override your setting if the image isn't 24-bit RGB or 8-bit gray.
If you are encoding a bunch of files but want fine-grained control over compression, the encoder has an event, SetEncoderCompression, that is invoked before each image is encoded; it lets you see what the encoder chose and override it if you like.
FWIW, I wrote most of the PDF Encoder and the underlying layer that exports the actual PDF.

Tools for JPEG optimization? [closed]

Do you know of any tools (preferably command-line) to automatically and losslessly optimize JPEGs that I could integrate into our build environment? For PNGs I'm currently using PNGOUT, and it generally saves around 40% bandwidth/image size.
At the very least, I would like a tool that can strip metadata from the JPGs - I noticed a strange case where I tried to make thumbnail from a photograph, and couldn't get it smaller than 34 kB. After investigating more, I found that the EXIF data was still part of the image, and the thumbnail was 3 kB after removing the metadata.
And beyond that: is it possible to further optimize JPGs losslessly? The PNG optimizer tries different compression strategies, random initialization of the Huffman encoding, etc.
I am aware that most savings come from the JPEG quality parameter, and that it's a rather subjective measure. I'm only looking for a tool that can be run as a build step and that losslessly squeezes a few bytes from the images.
I wrote a GUI for all image optimization tools I could find, including MozJPEG and jpegoptim that optimize Huffman tables, progressive scans, and (optionally) remove invisible metadata.
If you don't have a Mac, I also have a basic web interface that works on any platform.
I use libjpeg for lossless operations. It contains a command-line tool, jpegtran, that can do all you want. With the command-line option -copy none all the metadata is stripped, and -optimize does a lossless optimization of the Huffman compression. You can also convert the images to progressive mode with -progressive, but that might cause compatibility problems (does anyone know more about that?)
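Putting those flags together (a sketch; file names are placeholders):
jpegtran -copy none -optimize -progressive -outfile out.jpg in.jpg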
[WINDOWS ONLY]
RIOT (Radical Image Optimization Tool)
This is the greatest image optimization tool I have found!
http://luci.criosweb.ro/riot/
You can easily get a 10MB image down to 800KB through sub-sampling.
It supports PNG, GIF, and JPEG.
It even integrates into context menus so you can send pictures straight there.
It allows you to rotate, resize, compress to a specified size in KB, and more. It also has plugins for GIMP, IrfanView and other tools.
There is also a DLL available if you want to incorporate it into your own programs or a JavaScript / C++ program.
Another alternative is http://pnggauntlet.com/ . PNGGauntlet takes forever, but it does a pretty good job.
[WINDOWS ONLY]
A new service called JPEGmini produces incredible results. A shame that it's online only. Edit: It's available for Windows and Mac now
I tried a number of the suggestions above; I was personally after lossless compression.
My sample image had an original size of 67,737 bytes.
Using kraken.io, it went down to 64,718.
Using jpegtran, it went down to 64,718.
Using Yahoo Smush.it, it went down to 61,746.
Using ImageMagick (-strip), it went down to 65,312.
The smush.py option looks promising, but the installation was too complex for me to do quickly.
jpegrescan looks promising too, but seems to be Unix-only, and I'm using Windows.
JPEGmini is NOT lossless, but I can't tell the difference (down to 22,172).
plinth's Atalasoft jpegstripper app does not work on my Windows 7.
jpegoptim is not available for Windows, so no good for me.
RIOT (keeping quality at 100%) got it down to 63,416, and with chroma subsampling set to high it got it down to 61,912. I don't know whether that is lossless or not, though, and I think it looks lighter than the original.
So my verdict is Yahoo Smush.it if it must be lossless.
I would try ImageMagick. It has tons of command-line options, it's free, and it has a nice license.
http://www.imagemagick.org
There seems to be an option called -strip that may help you:
http://www.imagemagick.org/script/command-line-options.php#strip
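For example (a sketch; note that ImageMagick decodes and re-encodes the JPEG data when it rewrites the file, so this step is not strictly lossless):
convert input.jpg -strip output.jpg   # re-saves the image without EXIF data or embedded profiles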
ImageOptim is really slick. The command line option posted by the author will populate the GUI and show progress. I used jpegtran for optimizing and converting to progressive, then ImageOptim for further progressive optimizations and for other file types.
Reuse of script code also found in this forum (all files replaced in place):
jpegtran
for file in $(find "$DIR" -type f \( -name "*.jpg" -or -name "*.jpeg" -or -name "*.JPG" \)); do
    echo "found $file for optimizing..."
    jpegtran -copy comments -optimize -progressive -outfile "$file" "$file"
done
ImageOptim
for file in $(find "$DIR" -type f \( -name "*.jpg" -or -name "*.png" -or -name "*.gif" \)); do
    echo "found $file for optimizing..."
    open -a ImageOptim.app "$file"
done
In case anyone's looking, I've written an offline version of Yahoo's Smush.it. It will losslessly optimise PNGs, JPGs and GIFs (animated and static):
http://github.com/thebeansgroup/smush.py
You can use jpegoptim, which will losslessly optimize JPEG files by default. The --strip-all option strips all extra embedded info. You can also specify a lossy mode with the --max switch, which is useful when you have images saved with a very high quality setting that is not necessary for e.g. web content.
You get similar optimization as with jpegtran (see the answer by OutOfMemory), but jpegoptim can't save to progressive JPEGs.
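For instance (a sketch using the flags mentioned above; the quality cap of 85 is a placeholder):
jpegoptim --strip-all photo.jpg          # lossless Huffman optimization plus metadata removal
jpegoptim --max=85 --strip-all photo.jpg # lossy: re-encodes anything saved above quality 85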
I've written a command-line tool called 'picopt' (similar to ImageOptim) that uses external programs to optimize JPEGs, PNGs, GIFs, animated GIFs and even comic book archive contents (CBR/CBZ).
It is suitable for use with Homebrew on OS X or on Linux systems where you have installed tools like jpegrescan, jpegtran, optipng, gifsicle, etc.
https://github.com/ajslater/picopt
I too would recommend ImageMagick. It has a command line option to remove EXIF metadata
mogrify -strip image.jpg
There are plenty of other tools out there that do the same thing.
As far as recompressing JPEGs goes: don't. JPEGs are lossy to start with, so any form of recompression is only going to hurt image quality. However, if you have losslessly encoded images, some encoders do a better job than others. I have noticed that JPEGs done with Photoshop consistently look better than ones encoded with ImageMagick (despite the same file size), for complicated reasons. Furthermore (and this is relevant to you), I know that at least Photoshop can save JPEGs as optimized, which means they drop compatibility with some stuff you probably don't care about in order to save a couple of KB. Also, make sure you don't have any colour profiles embedded, and you may be able to save another couple of KB.
I would recommend using http://kraken.io. It's an ultra-fast webapp which will optimize your PNG and JPEG files far better than smush.it does.
I recommend JpegOptim: it's free and really nice, you can specify the quality, the size you want ... and it's easy to use from the command line.
JpegOptim
May I recommend this for near-transparency:
convert 'yourfile.png' ppm:- | jpeg-recompress -t 97 -q veryhigh -a -m smallfry -s -r -S disable - yourfile.jpg
It uses ImageMagick's convert and jpeg-recompress from jpeg-archive.
Both are open-source and work on Windows, Mac and Linux. You may want to tweak the options above for different quality expectations.

Using ps2pdf on EPS files with PNG used for bitmaps?

We're currently using ps2pdf to convert EPS files to PDF. These EPS files contain both vector information (lines and text) and bitmap data.
However, by default ps2pdf converts the bitmap components of these images to JPG as they're embedded within the PDF, whereas for the type of graphics we have (data visualisation) it would be much more appropriate to use lossless compression. PDF supports PNG, so it should be possible to achieve what we're trying to do, but I'm having trouble finding a relevant option in the somewhat intimidating manual.
So the short question is: what is the correct way to write this?
    ps2pdf -dPDFSETTINGS=UsePNGinsteadOfJPGcompression input.eps output.pdf
The answer is not -dUseFlateCompression, since that option refers to using Flate instead of LZW compression; both are lossless, but LZW was covered by patents for a while. Since that's no longer a problem, the option is ignored.
Instead, the options needed to achieve lossless encoding of the bitmap data are (all four of them):
-dAutoFilterColorImages=false
-dAutoFilterGrayImages=false
-dColorImageFilter=/FlateEncode
-dGrayImageFilter=/FlateEncode
You might also want to do the same thing with MonoImageFilter, but I assume /CCITTFaxEncode does a reasonable job there, so it's not too important.
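Spelled out as a single command (a sketch combining the four options above with the invocation from the question):
ps2pdf -dAutoFilterColorImages=false -dAutoFilterGrayImages=false \
       -dColorImageFilter=/FlateEncode -dGrayImageFilter=/FlateEncode \
       input.eps output.pdf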