I have searched in many places but have been unable to find a good solution for this.
What I am trying to achieve is the following:
My program will have quite a lot of PDF documents that I have to send via mail. The mail server has a 4 MB limit, so if all the PDFs together are less than 4 MB they will be sent as a single mail. Otherwise I have to create multiple files, each less than 4 MB.
Now my program works fine for the following cases:
1: Lots of files, each less than 4 MB: I keep a tab during merging so that none of the merged files goes over 4 MB.
2: All files are pretty small, so merging them together does not reach the 4 MB limit.
But there can be a scenario where one file is, say, 14 MB. I can split that document by pages, but that is not a good solution either, as the page size is not evenly distributed across the pages. I have used iText and PDFBox. Any help/pointers will be highly appreciated!
Imagine a 3000 KB document with ten pages and the following objects:
four font subsets used on every page, each about 50 KB
ten images that appear on a single page only, each about 200 KB (one image per page)
four images that appear on every page, each about 50 KB
ten pages with content streams of about 25 KB each
about 350 KB for objects such as the catalog, the info dictionary, the page tree, the cross-reference table, etc...
A single page will need at least:
- the four font subsets: 4 times 50 KB
- the single image: 1 time 200 KB
- the four images: 4 times 50 KB
- a single content stream: 1 time 25 KB
- a slightly reduced cross-reference table, a slightly reduced page tree, an almost identical catalog, an info dictionary of identical size,... 200 KB
Together that's 825 KB. This means that you end up with 8,250 KB (10 times 825 KB) if you split a 10-page, 3,000 KB PDF document into 10 separate single-page documents.
This example is the result of guesswork (based on experience) and it assumes that the PDF is predictable. Most PDFs aren't:
some pages will require high-definition images (maybe even megabytes), other pages won't have any images,
some pages will need many different fonts and font subsets (lots of kilobytes), other pages will consist of merely some vector drawings (a tiny content stream if compressed),
different pages can share a large number of resources (Form XObjects, Image XObjects, ...), other pages won't share any resources,
and so on...
You have noticed that yourself, as you write: "I can split that document by pages. But that is also not a good solution as the page size is not evenly distributed across the pages."
That's exactly why your question can have no other answer than: you'll have to do trial and error. No software can predict how much space is needed by a page before you look at what is needed by that page.
Update:
As David indicates in the comments, it is possible to calculate all the resources needed for a page, and to check if the current resources plus the needed resources exceed the maximum file size.
I have written a small example:
public void manipulatePdf(String src, String dest)
        throws IOException, DocumentException {
    Document document = new Document();
    PdfCopy copy = new PdfSmartCopy(document, new FileOutputStream(dest));
    document.open();
    PdfReader reader = new PdfReader(src);
    for (int i = 1; i <= reader.getNumberOfPages(); i++) {
        // check resources needed for reader.getPageN(i);
        copy.addPage(copy.getImportedPage(reader, i));
        // getCounter() reports the number of bytes written to the output so far
        System.out.println("After adding page: " + copy.getOs().getCounter());
    }
    document.close();
    System.out.println("After closing document: " + copy.getOs().getCounter());
    reader.close();
}
I have executed the example on a PDF sample with 18 pages and this was the output:
After adding page: 56165
After adding page: 111398
After adding page: 162691
After adding page: 210035
After adding page: 253419
After adding page: 273429
After adding page: 330696
After adding page: 351564
After adding page: 400351
After adding page: 456545
After adding page: 495321
After adding page: 523640
After adding page: 576468
After adding page: 633525
After adding page: 751504
After adding page: 907490
After adding page: 957164
After adding page: 999140
After closing document: 1002509
You see how the file size of the copy gradually grows with each page that is added. After all pages are added, the size is 999140 bytes, and then the page tree and cross-reference stream are written, adding another 3369 bytes.
Where it says // check resources needed for reader.getPageN(i);, you could make a guesstimate of the size that will be added for the page and break out of the loop if it exceeds a maximum value.
Why would this be a guesstimate:
You could be counting objects that are already added. If you keep track of the objects (not that difficult), your guess will be more accurate.
I'm using PdfSmartCopy. Suppose that there are two identical objects inside your PDF. Bad PDF software often causes such problems. For instance: the same image bytes are added twice to the file. PdfSmartCopy can detect this and will reuse the first object it encounters instead of adding the redundant bytes of the extra object.
We currently don't have a reader.getTotalPageBytes() in PdfReader because PdfReader tries to use as little memory as possible. It won't load any objects into memory as long as these objects aren't needed. Hence it doesn't know the size of each object before the page is imported.
However, I'll make sure that such a method is added in the next release.
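To make that guesstimate approach concrete, a rough sketch could look like this (same iText 5 classes as the example above; because the check happens only after a page has been added, keep the threshold safely below the real 4 MB limit):
public void splitBySize(String src, long maxBytes)
        throws IOException, DocumentException {
    PdfReader reader = new PdfReader(src);
    int part = 1;
    Document document = new Document();
    PdfCopy copy = new PdfSmartCopy(document, new FileOutputStream("part_" + part + ".pdf"));
    document.open();
    for (int i = 1; i <= reader.getNumberOfPages(); i++) {
        copy.addPage(copy.getImportedPage(reader, i));
        // if we passed the threshold and pages remain, close this part and start the next one
        if (copy.getOs().getCounter() > maxBytes && i < reader.getNumberOfPages()) {
            document.close();
            part++;
            document = new Document();
            copy = new PdfSmartCopy(document, new FileOutputStream("part_" + part + ".pdf"));
            document.open();
        }
    }
    document.close();
    reader.close();
}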
Update:
In the next version, you'll find a tool named SmartPdfSplitter that depends on a new class named PdfResourceCounter. You can use it like this:
PdfReader reader = new PdfReader(src);
SmartPdfSplitter splitter = new SmartPdfSplitter(reader);
int part = 1;
while (splitter.hasMorePages()) {
    // write the next part, keeping it under 200,000 bytes where possible
    splitter.split(new FileOutputStream("results/merge/part_" + part + ".pdf"), 200000);
    part++;
}
reader.close();
Note that this can result in a single-page PDF that exceeds the limit (set to 200,000 bytes in the code sample) when that single page cannot be reduced to fewer bytes. In that case, splitter.isOverSized() will return true and you'll have to find another way to reduce the PDF.
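If you want to detect that situation in code, a small variation of the loop above might look like this (isOverSized() is the method mentioned above; checking it right after each split call is an assumption on my part):
while (splitter.hasMorePages()) {
    splitter.split(new FileOutputStream("results/merge/part_" + part + ".pdf"), 200000);
    if (splitter.isOverSized()) {
        // the part just written contains a page that could not be kept under the limit
        System.out.println("Part " + part + " exceeds the size limit and needs further reduction.");
    }
    part++;
}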
PDF Clown supports page data size prediction without the need for trial and error: since 2010 it has featured a dedicated method (org.pdfclown.tools.PageManager.getSize(Page)) that calculates the actual page data size in memory, without writing anything to a file for trial.
Furthermore, there's another method (org.pdfclown.tools.PageManager.split(long maxDataSize)), purposely implemented to address your kind of scenario, which leverages the above-mentioned PageManager.getSize method: it automatically splits a file based on a size limit without creating any intermediate temporary files for trial and error.
You can see a practical example of its use in the org.pdfclown.samples.cli.PageManagementSample (PageDataSizeCalculation and DocumentSplitOnMaximumFileSize cases) included in the downloadable distribution -- here is an example of console output from the PageDataSizeCalculation case:
Page 1: 29380 (full); 29380 (differential); 29380 (incremental)
Page 2: 30493 (full); 1501 (differential); 30881 (incremental)
Page 3: 21888 (full); 1432 (differential); 32313 (incremental)
Page 4: 33781 (full); 4789 (differential); 37102 (incremental)
. . .
where:
full is the page data size encompassing all its dependencies (like shared resources) -- this is the size of the page when extracted as a single-page document;
differential is the additional page data size -- this is the extra content that's not shared with previous pages;
incremental is the data size of the page sublist encompassing all the previous pages and the current one.
Related
I use this method to copy and scale pages, selected by page number, from an original PDF into a generated PDF that contains only the selected, scaled pages.
private static void addScaledPage(PdfDocument pdf, PdfDocument srcDoc, String pageNumber) throws IOException {
    PdfPage page = pdf.addNewPage(PageSize.A4);
    PdfCanvas canvas = new PdfCanvas(page);
    AffineTransform transformationMatrix = AffineTransform.getScaleInstance(0.86, 0.86);
    canvas.concatMatrix(transformationMatrix);
    PdfFormXObject pageCopy = srcDoc.getPage(Integer.valueOf(pageNumber)).copyAsFormXObject(pdf);
    canvas.addXObject(pageCopy, 50, 30);
}
This code works fine, but a small issue happens when I try to take 3 pages from an original PDF that has 140 pages and is approx. 10 MB in size: the generated PDF with the 3 selected pages is also approx. 10 MB.
Also, when I copy 3 pages or 10 pages from the original document, I always get the same size of generated PDF, so it seems like references are copied from the source PDF.
I would appreciate some advice: did I do something wrong in the implementation, or is there a better approach?
Kindest regards,
It depends a lot on the resources embedded in the document. If a large image that uses CMYK color, or a font with CJK glyphs (either of these resources could easily be several MB in size), is used on the pages you are copying, that entire resource will be copied into the PDF you're creating. The fact that you are only copying three out of 140 pages wouldn't make much difference: the bulk of the file size will be taken up by the resource, and the pages won't display properly without it.
A solution would be a workflow that optimizes your document during or after copying the pages. This could convert images to an equivalent, smaller color space, or subset the font so that you only carry the required glyphs. Both of these techniques can substantially reduce the size of the file (but this is all dependent on how the source file itself is constructed, of course).
I have an existing source PDF document, from which I copy selected pages and generate a destination PDF containing only those pages. Every page in the source document is scanned at a different resolution, so the output varies in size:
generated document with 4 pages => 175 KB
generated document with 4 pages => 923 KB (I suppose this is because of the higher scan resolution of the pages in the source document)
What would be best practice to compress these pages?
Is there any code sample for compressing / reducing the size of a final PDF that consists of copied pages of a source document in different resolutions?
Kindest regards
If you are just adding scans to a pdf document, it makes sense for the size of the resulting document to go up if you're using a high resolution image.
Keep in mind that iText is a PDF library, not an image-manipulation library.
You could of course use plain old Java to compress the images:
public static void writeJPG(BufferedImage bufferedImage, OutputStream outputStream, float quality) throws IOException
{
    Iterator<ImageWriter> iterator = ImageIO.getImageWritersByFormatName("jpg");
    ImageWriter imageWriter = iterator.next();
    ImageWriteParam imageWriteParam = imageWriter.getDefaultWriteParam();
    imageWriteParam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    // quality ranges from 0.0 (smallest file) to 1.0 (best quality)
    imageWriteParam.setCompressionQuality(quality);
    ImageOutputStream imageOutputStream = new MemoryCacheImageOutputStream(outputStream);
    imageWriter.setOutput(imageOutputStream);
    IIOImage iioimage = new IIOImage(bufferedImage, null, null);
    imageWriter.write(null, iioimage, imageWriteParam);
    imageOutputStream.flush();
}
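As a rough usage sketch: assuming the iText 7 API used elsewhere in this question, that the scan is available as a BufferedImage named scan, and that document is a com.itextpdf.layout.Document, the re-encoded JPEG bytes could then be wrapped in an image (com.itextpdf.io.image.ImageDataFactory / com.itextpdf.layout.element.Image) and added to the document:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
writeJPG(scan, baos, 0.5f); // lower quality => smaller file
ImageData imageData = ImageDataFactory.create(baos.toByteArray());
document.add(new Image(imageData));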
But really, putting scanned images into a pdf makes life so much more difficult. Imagine the people who have to handle that document after you. They open it, see text, try to select it, and nothing happens.
Additionally, you might change the WriterProperties when creating your PdfWriter instance:
PdfWriter writer = new PdfWriter(dest,
new WriterProperties().setFullCompressionMode(true));
Full compression mode will compress certain objects into an object stream, and it will also compress the cross-reference table of the PDF. Since most of the objects in your document will be images (which are already compressed), compressing objects won't have much effect, but if you have a large number of pages, compressing the cross-reference table may result in smaller PDF files.
I have a website that displays products. Each page displays 16 products and there are around 70,000 products on the site. The HTML for each page is generated using PHP.
Product information is stored within a database. Roughly, the first page of results (if I want to show cheapest items first) would be displayed like this (pseudo code only):
// run sql to fetch product titles and image filenames
SELECT itemTitle, itemImageFileName FROM items ORDER BY itemPrice ASC LIMIT 16
// loop through and display items
for ($i = 0; $i < 16; $i++) {
    echo "<p>$itemTitle</p>";
    echo "<img src='$itemImageFileName' height='100' width='80'>";
}
When I do this, the image titles appear first, and then the images are loaded around half a second to one second afterwards. I am wondering how I can accelerate the image loading.
All images are stored in a single folder containing 70,000 images. Nearly all images are less than 50KB in size. Each image filename is of the form: id_width_height.jpg. For example, a filename might be like: 32193_80_100.jpg
I am wondering whether the bottleneck is that it takes the server some time to find the required files because there are 70,000 files in the folder. Is there a way I can accelerate this? Are there any other reasons why images are slow to load?
First of all, I would inspect the image requests in the network tab of Chrome/Firefox. That will answer the question of whether the lookup or the download is the critical part.
Do you have a link?
70,000 files in one folder is, IMHO, too many. I would split the filename up and create subfolders such as:
/htdocs/img/32193/80_100.jpg
The PDF Spec defines standard structure types, used to define a structure tree for the document. As far as I can see, there is no element related to pages. Here are the standard structure types for grouping elements:
Document
Part
Art
Sect
Div
...and so on...
Why is there no Page item in this list?
If you want your structure to use pages, what should be used? Part? Sect? Div?
PDF tags exist so that the content type / meaning of elements can be identified. They should be considered a kind of "meta" information for the PDF, simply providing context for the content in a file (so that content can easily be extracted, converted, processed, made accessible, etc.). Think of it as a table of contents to a book. Just because the book has x pages doesn't mean that the content structure would be altered if the book's page height was cut in half and it now had 2x pages in it.
A Page Object in the PDF Document Structure already groups elements (by nature of each element being on a given page), so doing so in this structure would be a little redundant.
Also, consider this case:
Document
Table of Contents (Page 1)
Section 1 (starts on page 2, ends mid page 3)
Sub Section (page 2)
Sub Section (half of page 3)
Section 2 (starts mid page 3)
etc...
In this example, Section 1 and Section 2 couldn't both be direct parents of page 3 (not to mention that Section 1 spans two different pages). Additionally, trying to solve this problem really isn't necessary, because each of the elements being grouped here is already a child of its respective page node in the document structure of the actual file format.
Appendix G of the PDF Specification gives examples that demonstrate use of the Page object.
A PDF has a tree structure (which is what allows any page to be loaded so fast). The content does not have any structure unless you choose to use the marked-content feature of the format, which allows metadata to be included in the data.
I am working with JasperReports and the iReport tool. One of the client's requirements is that the generated PDF file must be limited to a 100-page document.
Could you please help me? How can I generate the 100-page PDF document?
As @WEG mentioned in the answer to the JasperReports size limit question, it can be done with the help of these parameters (see the sketch after the list):
net.sf.jasperreports.governor.max.pages.enabled - a flag indicating whether the governor that checks if a report exceeds a specified limit of pages is turned on. With this property enabled, the JR engine will stop the report execution if the number of pages becomes greater than a custom given value;
net.sf.jasperreports.governor.max.pages - if the governor that checks whether a report exceeds a specified limit of pages is turned on, this property indicates the maximum number of pages allowed to be run, in order to prevent a memory overflow error. If the number of pages in the report becomes greater than this value, the report execution will be stopped;
REPORT_MAX_COUNT - an integer allowing you to limit the data source size.
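As a minimal sketch of how the governor properties listed above might be set programmatically (they can also be placed in jasperreports.properties; using JRPropertiesUtil here is just one option):
// enable the page governor and cap reports at 100 pages
JasperReportsContext context = DefaultJasperReportsContext.getInstance();
JRPropertiesUtil properties = JRPropertiesUtil.getInstance(context);
properties.setProperty("net.sf.jasperreports.governor.max.pages.enabled", "true");
properties.setProperty("net.sf.jasperreports.governor.max.pages", "100");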
In iReport you can find a built-in variable, PAGE_COUNT. For every element in the detail band you can put the following in the "Print When Expression" textbox:
Boolean.valueOf($V{PAGE_COUNT}.intValue() < 100)
This will stop printing after page number 100.