Are there different JPEG2000 file formats? - jpeg2000

I've seen JPEG2000 files with both .J2K and .JP2 extensions, and codecs which read one won't always read the other. Can someone explain why there are multiple extensions for what I thought was a single format?

Because JPEG 2000 is both a codec and a file format. The standard comes in many parts: Part 1 gives (mostly) codec information (i.e. how to compress and decompress image data), together with a container file format annex (JP2). Part 2 adds many extensions and a more comprehensive container format (JPX).
JP2 is the "container" format for JPEG 2000 codestreams, and is modelled on the QuickTime format. J2K I've not seen (we used J2C during standardisation), but I presume it is raw compressed data, without a wrapper. The point of the containers is that a "good" image comes with additional metadata - colour space information, tagging, etc. The JP2 format base allows many "boxes" of information in one file (including many images, if you like). It also allows the same container format to be used for many other parts of the standard (such as JP3D, and JPIP). Really, you shouldn't see many unwrapped, raw data streams - it is, in my opinion, bad practice.
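Since the two flavours start with different magic bytes, telling them apart is straightforward. A small sketch (the function name and return strings are mine; the signatures themselves come from the standard):

```python
def jpeg2000_kind(data: bytes) -> str:
    """Classify JPEG 2000 data by its leading bytes.

    A JP2 container starts with the 12-byte signature box; a raw
    codestream (what .j2k/.j2c files usually hold) starts with the
    SOC and SIZ markers, FF 4F FF 51.
    """
    if data.startswith(b"\x00\x00\x00\x0cjP  \r\n\x87\n"):
        return "jp2 container"
    if data.startswith(b"\xff\x4f\xff\x51"):
        return "raw codestream"
    return "unknown"
```

This also explains why a codec that expects one flavour rejects the other: a JP2 reader looks for the signature box first, while a raw-codestream decoder expects the SOC marker immediately.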


PDF Entropy calculation

Last time mkl helped me a lot, hopefully he (or someone else) can help me with these questions too. Unfortunately I couldn't get access to the ISO norm (ISO 32000-1 or 32000-2).
Are these bytes used for padding? I have tried several files, and they all have padding characters. This is quite remarkable, as I would expect that this substantial amount of low-entropy bytes should significantly lower the average entropy of the PDF file. However, this does not seem to be the case, as the average entropy of a PDF file is almost eight bits.
Furthermore, this (meta)data should be part of an object stream, and therefore compressed, but this is not the case (is there a specific reason for this?). (Magenta = high entropy/random; the darker the color, the lower the entropy. I generated this image with http://binvis.io/#/.)
These are the entropy values of a .doc file (not .docx) that I converted to a PDF with version 1.4, as this version should not contain object streams etc. However, the entropy values of this file are still quite high. I would expect the entropy of a PDF with version <1.5 to be lower on average, as it does not use object streams, but the results are similar to a PDF with version 1.5.
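(For reference, the per-byte Shannon entropy values mentioned above can be computed with a short Python sketch like this; the function name is mine:)

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte, from 0.0 (constant) to 8.0 (uniform)."""
    if not data:
        return 0.0
    n = len(data)
    ent = -sum((c / n) * math.log2(c / n) for c in Counter(data).values())
    return ent if ent > 0 else 0.0

# A run of identical bytes (like XMP padding) has zero entropy;
# uniformly distributed bytes reach the 8-bit maximum, which is
# what compressed or encrypted streams approach.
padding_like = shannon_entropy(b" " * 4096)
uniform = shannon_entropy(bytes(range(256)))
```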
I hope somebody can help me with these questions. Thank you.
Added part:
The trailer dictionary has a variable length, and with PDF 1.5 (or higher) it can be part of the central directory stream; so not only the length but also the position/offset of the trailer dictionary can vary (or can it? It seems that even if the trailer dictionary is part of the central directory stream, it is always at the end of the file, at least in all the PDFs I tested). The only thing I don't really understand is that for some reason the researchers of this study assumed that the trailer has a fixed size and a fixed position (the last 164 bytes of a file).
They also mention in Figure 8 that a PDF file encrypted by EasyCrypt has some structure in both the header and the trailer (which is why it has a lower entropy value compared to a PDF file encrypted with ransomware).
However, when I encrypt several PDF files (with different versions) with EasyCrypt (I tried three different symmetric encryption algorithms: AES 128-bit, AES 256-bit and RC2), I get a fully encrypted file, without any structure/metadata left unencrypted (neither in the header nor in the trailer). When I encrypt a file with Adobe Acrobat Pro, however, the structure of the PDF file is preserved. This makes sense, since the PDF extension has its own standardised format for encrypting files, but I don't really understand why they mention that EasyCrypt conforms to this standardised format.
PDF Header encrypted with EasyCrypt:
PDF Header encrypted with Adobe Acrobat Pro:
Unfortunately I couldn't get access to the ISO norm (ISO 32000-1 or 32000-2).
https://opensource.adobe.com/dc-acrobat-sdk-docs/pdfstandards/PDF32000_2008.pdf
Are these bytes used for padding?
Those bytes are part of a metadata stream. The format of the metadata is XMP. According to the XMP spec:
Padding: It is recommended that applications allocate 2 KB to 4 KB of padding to the packet. This allows the XMP to be edited in place, and expanded if necessary, without overwriting existing application data. The padding must be XML-compatible whitespace; the recommended practice is to use the space character (U+0020) in the appropriate encoding, with a newline about every 100 characters.
So yes, these bytes are used for padding.
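One way to see this for yourself is to locate the (uncompressed) XMP packet and measure its whitespace runs. A rough sketch; the helper name and the 50-byte run threshold are just illustrative choices:

```python
import re

def xmp_padding_bytes(pdf_bytes: bytes) -> int:
    """Measure whitespace padding inside the first XMP packet.

    Assumes the XMP stream is stored uncompressed, which is exactly
    what the spec recommends so non-PDF-aware tools can read it.
    Runs of 50+ whitespace bytes are counted as padding (an arbitrary
    threshold to skip ordinary inter-element whitespace).
    """
    start = pdf_bytes.find(b"<?xpacket begin=")
    end = pdf_bytes.find(b"<?xpacket end=")
    if start == -1 or end == -1 or end <= start:
        return 0
    packet = pdf_bytes[start:end]
    return sum(len(run) for run in re.findall(rb"[ \t\r\n]{50,}", packet))
```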
Furthermore, this (meta)data should be part of an object stream, and therefore compressed, but this is not the case (is there a specific reason for this?)
Indeed, there is. The PDF document-wide metadata streams are intended to be readable also by applications that don't know the PDF format but do know the XMP format. Thus, these streams should not be compressed or encrypted.
...
I don't see a question in that item.
Added part
the position/offset of the trailer dictionary can vary (or can it? It seems that even if the trailer dictionary is part of the central directory stream, it is always at the end of the file, at least in all the PDFs I tested)
Well, as the stream in question contains cross-reference information for the objects in the PDF, it usually is only finished pretty late in the process of creating the PDF and, therefore, added pretty late to the PDF file. Thus, a position near the end is usually to be expected.
The only thing I don't really understand is that for some reason the researchers of this study assumed that the trailer has a fixed size and a fixed position (the last 164 bytes of a file).
As already discussed, assuming a fixed position or length of the trailer in general is wrong.
If you wonder why they assumed such a fixed size nonetheless, you should ask them.
If I were to guess why they did, I'd assume that their set of 200 PDFs simply was not generic. In the paper they don't mention how they selected those PDFs, so maybe they used a batch they had at their hands without checking how special or how generic it was. If those files were generated by the same PDF creator, chances indeed are that the trailers have a constant (or near constant) length.
If this assumption is correct, i.e. if they worked with a not-generic set of test files only, then their results, in particular their entropy values and confidence intervals and the concluded quality of the approach, are questionable.
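A sketch of the robust way to find the trailer area: search backwards for the startxref keyword instead of slicing a fixed 164-byte tail (the helper name and the 2 KB window are my own illustrative choices):

```python
def trailer_tail(pdf_bytes: bytes, window: int = 2048) -> bytes:
    """Return everything from the last 'startxref' keyword to end of file.

    The trailer's size and offset vary by producer, so we search
    backwards for the keyword rather than assuming a fixed-length tail.
    The 2 KB window is an arbitrary but usually sufficient guess.
    """
    tail = pdf_bytes[-window:]
    idx = tail.rfind(b"startxref")
    if idx == -1:
        raise ValueError("no startxref near end of file")
    return tail[idx:]
```

Running this over a varied corpus quickly shows that the tail length is anything but constant.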
They also mention in Figure 8 that a PDF file encrypted by EasyCrypt has some structure in both the header and the trailer (which is why it has a lower entropy value compared to a PDF file encrypted with ransomware).
However, when I encrypt a file with EasyCrypt (I tried three different symmetric encryption algorithms: AES 128 bit, AES 256 bit and RC2) and encrypt several PDF files (with different versions), I get a fully encrypted file, without any structure/metadata that is not encrypted (neither in the header nor in the trailer).
In the paper they show a hex dump of their file encrypted by EasyCrypt:
Here there is some metadata (albeit not PDF specific) that should show less entropy.
As your EasyCrypt encryption results differ, there appear to be different modes of using EasyCrypt, some of which add this header and some don't. Or maybe EasyCrypt used to add such headers but doesn't anymore.
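A quick heuristic to tell the two behaviours apart: the standardised PDF security handlers encrypt only string and stream contents, so the header and trailer keywords stay in the clear, whereas whole-file encryption removes them. A minimal check (an illustrative sketch, not a robust detector):

```python
def retains_pdf_structure(data: bytes) -> bool:
    """Heuristic: does an 'encrypted' file still expose PDF structure?

    Standard PDF encryption protects stream and string contents only,
    leaving the header and trailer keywords readable; whole-file
    encryption (as ransomware or a naive tool applies it) leaves
    neither visible.
    """
    return data.startswith(b"%PDF-") and b"%%EOF" in data[-1024:]
```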
Either way, this again indicates that the research behind the paper is not generic enough, taking just the output of one encryption tool in one mode (or in one version) as representative example for data encrypted by non-ransomware.
Thus, the results of the article are of very questionable quality.
the PDF extension has its own standardised format for encrypting files, but I don't really understand why they mention that EasyCrypt conforms to this standardised format.
If I haven't missed anything, they merely mention that "A constant regularity exists in the header portion of the normally encrypted files"; they don't say that this constant regularity conforms to this standardised format.

Searching text inside AFP files

I've been asked to convert files from PDF to AFP, and I've managed it using the IBM AFP printer driver. I was wondering if there's a way to search inside the AFP file. I know I can do it on the PDF file, but I've been asked to cross-check the converted files by searching inside them.
Is there a reason why a 370 KB PDF file is converted to an 11.5 MB AFP file? Is it converted as an image? (This would explain why I couldn't search inside it.)
C is your best option for searching for a string in AFP PTX records. However, it depends on how you are converting your PDF to AFP. If you use the IBM print drivers, the text will be rasterized, so you won't be able to search it.
AFP Explorer is one of the best freeware tools if this is a one-time request.
http://www.compulsivecode.com/project_afpexplorer.aspx
We use COMPART CPMCOPY and CPMILL to convert POS and PDF files into AFP, where you have MFF filters to get the required output. However, it is a licensed product.
The IBM AFP printer driver can be configured, to some extent. Check the manual page "Creating AFP Resources Using the IBM AFP Printer Drivers" for further details.
Make sure that "Print Text as Graphics" is turned off.
Some AFP viewers have the feature of text search within AFP files. Consider BTB Viewer (warning, it looks ridiculously outdated).
If you wish to develop your own solution, consider that in general, searching for text in AFP documents is complicated, since each "logical" text block can be split into a series of MO:DCA text instructions, each positioned individually. And it is not guaranteed that these instructions will be sequential. So expect problems searching for multi-word strings.
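To illustrate the structure involved (in Python rather than C, for brevity): an AFP file is a sequence of MO:DCA structured fields, each introduced by a 0x5A byte, a 2-byte length and a 3-byte identifier; Presentation Text Data (PTX) fields carry the id X'D3EE9B'. A sketch that collects raw PTX payloads, leaving the harder job of decoding the text control sequences inside them:

```python
PTX = bytes.fromhex("D3EE9B")  # Presentation Text Data structured field id

def iter_structured_fields(afp: bytes):
    """Walk MO:DCA structured fields in an AFP byte stream.

    Each field starts with a 0x5A carriage-control byte followed by a
    2-byte big-endian length (counted from the length field itself);
    the 8-byte introducer then carries a 3-byte field identifier.
    Yields (field id, payload) pairs. Deliberately minimal: real files
    may need padding and error handling this sketch omits.
    """
    pos = 0
    while pos < len(afp) and afp[pos] == 0x5A:
        length = int.from_bytes(afp[pos + 1:pos + 3], "big")
        field = afp[pos + 1:pos + 1 + length]
        yield field[2:5], field[8:]
        pos += 1 + length

def ptx_payloads(afp: bytes) -> list:
    """Collect the raw payloads of all PTX fields."""
    return [payload for sf_id, payload in iter_structured_fields(afp) if sf_id == PTX]
```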
"Converting" PDF to AFP is a generic term. It depends on what software you used to convert, and what settings were used for conversion. For instance, consider embedded images. Since many AFP devices do not support JPEG compression for I:OCA, the conversion app may convert raster images to raw 24-bit bitmap, which is ridiculously ineffective in terms of file size; an innocent background image of 1000×1000 px would take a whopping 3 MB of file size (while the original JPEG stream may be tens of kilobytes).

Are all PDF files compressed?

So there are some threads here on PDF compression saying that there is some, but not a lot of, gain in compressing PDFs as PDFs are already compressed.
My question is: Is this true for all PDFs including older version of the format?
Also I'm sure it's possible for someone (an idiot maybe) to place bitmaps into the PDF rather than JPEG etc. Our company has a lot of PDFs in its DBs (some older formats maybe). We are considering using gzip to compress during transmission but don't know if it's worth the hassle.
PDFs in general use internal compression for the objects they contain. But this compression is by no means compulsory according to the file format specifications. All (or some) objects may appear completely uncompressed, and they would still make a valid PDF.
There are command-line tools out there which are able to decompress most (if not all) of the internal object streams (even of the most modern versions of PDFs) -- and the new, uncompressed version of the file will render exactly the same on screen or on paper (if printed).
So to answer your question: No, you cannot assume that a gzip compression is adding only hassle and no benefit. You have to test it with a representative sample set of your files. Just gzip them and take note of the time used and of the space saved.
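Such a test can be as simple as measuring the compressed/original size ratio over a sample set; a small sketch (the function name is mine):

```python
import gzip
import os

def gzip_ratio(data: bytes) -> float:
    """Compressed size divided by original size.

    Values well below 1.0 mean gzip during transmission pays off;
    values near (or above) 1.0 mean the data is already compressed.
    """
    if not data:
        return 1.0
    return len(gzip.compress(data, compresslevel=6)) / len(data)

# Mostly-uncompressed PDFs behave like the repetitive case below,
# while already well-compressed ones behave like the random case:
repetitive = gzip_ratio(b"0 obj << /Type /Page >> endobj\n" * 3000)
random_like = gzip_ratio(os.urandom(100_000))
```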
It also depends on the type of PDF producing software which was used...
Instead of applying gzip compression, you would get much better gain by using PDF utilities to apply compression to the contents within the format as well as remove things like unneeded embedded fonts. Such utilities can downsample images and apply the proper image compression, which would be far more effective than gzip. JBIG2 can be applied to bilevel images and is remarkably effective, and JPEG can be applied to natural images with the quality level selected to suit your needs. In Acrobat Pro, you can use Advanced -> PDF Optimizer to see where space is used and selectively attack those consumers. There is also a generic Document -> Reduce File Size to automatically apply these reductions.
Update:
Ika's answer has a link to a PDF optimization utility that can be used from Java. You can look at their sample Java code there. That code lists exactly the things I mentioned:
Remove duplicated fonts, images, ICC profiles, and any other data stream.
Optionally convert high-quality or print-ready PDF files to small, efficient and web-ready PDF.
Optionally down-sample large images to a given resolution.
Optionally compress or recompress PDF images using JBIG2 and JPEG2000 compression formats.
Compress uncompressed streams and remove unused PDF objects.

Is it possible to extract tiff files from PDFs without external libraries?

I was able to use Ned Batchelder's python code, which I converted to C++, to extract jpgs from pdf files. I'm wondering if the same technique can be used to extract tiff files and if so, does anyone know the appropriate offsets and markers to find them?
Thanks,
David
PDF files may contain different image data (not surprisingly).
Most common cases are:
Fax data (CCITT Group 3 and 4)
raw raster data with decoding parameters and optional palette all compressed with Deflate or LZW compression
JPEG data
Recently, I (as a developer of a PDF library) have started noticing more and more PDFs with JBIG2 image data. Also, JPEG 2000 data can sometimes be put into a PDF.
I should say that you probably can extract JPEG/JBIG2/JPEG2000 data into corresponding *.jpeg / *.jp2 / *.jpx files without external libraries, but be prepared for all kinds of weird PDFs emitted by broken generators. Also, PDFs quite often use object streams, so you'll need to implement a sophisticated PDF parser.
Fax data (i.e. what you probably call TIFF) should at least be packed into a valid TIFF. You can borrow some code for that from the open-source libtiff, for example.
And then comes raw raster data. I don't think it makes sense to try to extract such data without the help of a library. You could do that, of course, but it would take months of work.
So, if you are trying to extract only a specific kind of image data from a set of PDFs all created with the same generator, then your task is probably feasible. In all other cases I would recommend saving time, money and hair and using a library for the task.
PDF files store JPEGs as actual JPEG data (DCT and JPX encoding), so in most cases you can rip the data out. With TIFFs, you are looking for CCITT data (but you will need to add a header to the data to make it a TIFF). I wrote two blog articles on images in PDF files at http://www.jpedal.org/PDFblog/2010/09/understanding-the-pdf-file-format-images/ and http://www.jpedal.org/PDFblog/2011/07/extract-raw-jpeg-images-from-a-pdf-file/ which might help.
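The marker-scanning technique referred to above boils down to something like this sketch. Note it only works when the JPEG stream is stored verbatim (DCTDecode) and is not additionally Flate-compressed or encrypted, and FF D9 bytes occurring inside entropy-coded data can truncate a match, so treat the output as best-effort:

```python
def extract_jpegs(pdf_bytes: bytes) -> list:
    """Naive JPEG extraction by scanning for SOI/EOI marker pairs.

    Finds each FF D8 FF start-of-image sequence and cuts at the next
    FF D9 end-of-image marker. Best-effort only; a real extractor
    should parse the PDF object structure instead.
    """
    jpegs, pos = [], 0
    while True:
        start = pdf_bytes.find(b"\xff\xd8\xff", pos)
        if start == -1:
            break
        end = pdf_bytes.find(b"\xff\xd9", start)
        if end == -1:
            break
        jpegs.append(pdf_bytes[start:end + 2])
        pos = end + 2
    return jpegs
```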

How to optimize PDF file size?

I have an input PDF file (usually, but not always, generated by pdfTeX), which I want to convert to an output PDF that is visually equivalent (at any resolution) and has the same metadata (Unicode text info, hyperlinks, outlines etc.), but whose file size is as small as possible.
I know about the following methods:
java -cp Multivalent.jar tool.pdf.Compress input.pdf (from http://multivalent.sourceforge.net/). This recompresses all streams, removes unused objects, unifies equivalent objects, compresses whitespace, removes default values, compresses the cross-reference table.
Recompressing suitable images with jbig2 and PNGOUT.
Re-encoding Type1 fonts as CFF fonts.
Unifying equivalent images.
Unifying subsets of the same font to a bigger subset.
Remove fillable forms.
When distilling or otherwise converting (e.g. gs -sDEVICE=pdfwrite), make sure it doesn't degrade image quality, and doesn't increase (!) the image sizes.
I know about the following techniques, but they don't apply in my case, since I already have a PDF:
Use smaller and/or less fonts.
Use vector images instead of bitmap images.
Do you have any other ideas how to optimize PDF?
Optimize PDF Files
Avoid Refried Graphics
For graphics that must be inserted as bitmaps, prepare them for maximum compressibility and minimum dimensions. Use the best quality images that you can at the output resolution of the PDF. Inserting compressed JPEGs into PDFs and Distilling them may recompress JPEGs, which can create noticeable artifacts. Use black and white images and text instead of color images to allow the use of the newer JBIG2 standard that excels in monochromatic compression. Be sure to turn off thumbnails when saving PDFs for the Web.
Use Vector Graphics
Use vector-based graphics wherever possible for images that would normally be made into GIFs. Vector images scale perfectly, look marvelous, and their mathematical formulas usually take up less space than bitmapped graphics that describe every pixel (although there are some cases where bitmap graphics are actually smaller than vector graphics). You can also compress vector image data using ZIP compression, which is built into the PDF format. Acrobat Reader versions 5 and 6 also support the SVG standard.
Minimize Fonts
How you use fonts, especially in smaller PDFs, can have a significant impact on file size. Minimize the number of fonts you use in your documents to minimize their impact on file size. Each additional fully embedded font can easily take 40K in file size, which is why most authors create "subsetted" fonts that only include the glyphs actually used.
Flatten Fat Forms
Acrobat forms can take up a lot of space in your PDFs. New in Acrobat 8 Pro you can flatten form fields in the Advanced -> PDF Optimizer -> Discard Objects dialog. Flattening forms makes form fields unusable and form data is merged with the page. You can also use PDF Enhancer from Apago to reduce forms by 50% by removing information present in the file but never actually used. You can also combine a refried PDF with the old form pages to create a hybrid PDF in Acrobat (see "Refried PDF" section below).
see article
From PDF specification version 1.5 onwards there are two new methods of compression: object streams and cross-reference streams.
You mention that the Multivalent.jar compress tool compresses the cross reference table. This usually means the cross reference table is converted into a stream and then compressed.
The format of this cross reference stream is not fixed. You can change the bit size of the three "columns" of data. It's also possible to pre-process the stream data using a predictor function which will improve the compression level of the data. If you look inside the PDF with a text editor you might be able to find the /Predictor entry in the cross reference stream dictionary to check whether the tool you're using is taking advantage of this feature.
Using a predictor on the compression might be handy for images too.
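To see why a predictor helps: cross-reference stream rows are nearly identical, so storing each row as its byte-wise difference from the previous row (the PNG "Up" predictor, /Predictor 12) turns them into long runs of near-zero bytes that Flate compresses much better. A small sketch with fabricated xref-like rows:

```python
import zlib

def apply_up_predictor(rows: list) -> bytes:
    """Apply the PNG 'Up' predictor (PDF /Predictor 12) to fixed-width rows.

    Each row is emitted as a 0x02 filter-tag byte plus the byte-wise
    difference from the previous row (mod 256). Nearly identical
    consecutive rows, like cross-reference entries with steadily
    growing offsets, turn into highly repetitive output.
    """
    out = bytearray()
    prev = bytes(len(rows[0]))
    for row in rows:
        out.append(2)  # filter type: Up
        out.extend((b - p) & 0xFF for b, p in zip(row, prev))
        prev = row
    return bytes(out)

# Fabricated xref-like rows: type-1 entries with offsets growing by 100.
rows = [b"\x01" + off.to_bytes(4, "big") + b"\x00" for off in range(0, 50_000, 100)]
plain = zlib.compress(b"".join(rows))
predicted = zlib.compress(apply_up_predictor(rows))
# predicted is normally noticeably smaller than plain here.
```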
The second type of compression offered is the use of object streams.
Often in a PDF you have many similar objects. These can now be combined into a single object and then compressed. The documentation for the Multivalent Compress tool mentions that object streams are used but doesn't have many details on the actual choice of which objects to group together. The compression will be better if you group similar objects together into an object stream.