How to compress images (PNG, JPG and so on) using Objective-C

I want to shrink PNG or JPG files on OS X without affecting the image quality, like tinypng.org does.
Is there a recommended library? I only know ImageMagick. Is there a way to do that natively, or another library to shrink/compress images without affecting the image quality?
My aim is to shrink the file size, for example:
logo.png >> 476 KB before shrink
logo.png >> 50 KB after shrink
Edit: to be clear, I want to compress the size of the file, not the image resolution.

TinyPNG.org works by using image quantisation: similar colours in the image are mapped into an HSV or RGB colour model and then merged depending on their distance from one another.
How does it work?
...
When you upload a PNG (Portable Network Graphics) file, similar colours in your image are combined. This technique is called “quantisation”
...
src: http://tinypng.org
An answer here outlines a method of doing so: https://stackoverflow.com/a/492230/556479.
There are also some answers on this question that describe how to do it on Mac OS using Objective-C: How do I reduce a bitmap to a known set of RGB colours
See Wikipedia for a more in depth guide: http://en.wikipedia.org/wiki/Color_quantization
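At its core, any quantiser maps each pixel to the nearest entry of a reduced palette. A minimal sketch of that step in C (a naive nearest-colour search by squared RGB distance; this is illustrative only, not TinyPNG's actual algorithm, which also chooses the palette cleverly and dithers):
#include <stdint.h>
#include <limits.h>

typedef struct { uint8_t r, g, b; } Color;

/* Return the index of the palette entry closest to c,
   measured by squared Euclidean distance in RGB space. */
static int nearest_palette_index(Color c, const Color *palette, int n)
{
    int best = 0;
    long best_dist = LONG_MAX;
    for (int i = 0; i < n; i++) {
        long dr = (long)c.r - palette[i].r;
        long dg = (long)c.g - palette[i].g;
        long db = (long)c.b - palette[i].b;
        long d = dr * dr + dg * dg + db * db;
        if (d < best_dist) { best_dist = d; best = i; }
    }
    return best;
}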

Did you have a problem using ImageMagick? It has a rich set of quantize functions; in the MagickWand C API the prototype is (ImageMagick 7 shown; older releases take a boolean dither flag instead of a DitherMethod):
MagickBooleanType MagickQuantizeImage(MagickWand *wand,
    const size_t number_colors,
    const ColorspaceType colorspace,
    const size_t treedepth,
    const DitherMethod dither_method,
    const MagickBooleanType measure_error)
Here is a very thorough guide to quantization using ImageMagick.
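For a self-contained starting point, a minimal MagickWand program that quantizes a PNG down to 256 colours might look like this (a sketch assuming the ImageMagick 7 headers and API; error checking omitted):
#include <MagickWand/MagickWand.h>  /* ImageMagick 7; IM6 uses <wand/MagickWand.h> */

int main(void)
{
    MagickWandGenesis();
    MagickWand *wand = NewMagickWand();

    MagickReadImage(wand, "logo.png");
    /* Reduce to 256 colours with Floyd-Steinberg dithering. */
    MagickQuantizeImage(wand, 256, RGBColorspace, 0,
                        FloydSteinbergDitherMethod, MagickFalse);
    MagickWriteImage(wand, "logo-small.png");

    wand = DestroyMagickWand(wand);
    MagickWandTerminus();
    return 0;
}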

My suggestion is to use http://pngnq.sourceforge.net; it gives better results than ImageMagick, and for the single example given on http://tinypng.org it produces very similar output. It is a tiny C implementation of the method presented in the paper "Kohonen Neural Networks for Optimal Colour Quantization". That alone is much better, since you are no longer relying on closed, unknown implementations.
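From what I remember of its command line (check the man page to be sure), quantizing to 256 colours is just:
pngnq -n 256 logo.png
which writes the result alongside the input as logo-nq8.png.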
Original (57 KB), tinypng.org (16 KB), pngnq (17 KB):
Using ImageMagick, the best quantization to 256 colors I can get uses the LAB colorspace and dithering by Floyd-Steinberg:
convert input.png -quantize LAB -dither FloydSteinberg -colors 256 output.png
This produces a 16 KB PNG, but it contains many more visual artifacts:

Related

Resizing multi-page mixed-format PDF with Ghostscript?

I have multi-page PDF-files with mixed formats A4 (portrait) - A0 (landscape).
Is Ghostscript capable of resizing the pages with size >A3 to A3 – but leaving the pages with smaller size (A4) not to be resized?
First, Ghostscript doesn't manipulate the input as such; you should read ghostpdl/doc/vectordevices.htm to see how Ghostscript and the pdfwrite device actually work.
Out of the box, no, Ghostscript and the pdfwrite device won't let you produce output whose media size both differs from the input and differs from page to page (you can have it produce output sized to a single media size). It can, of course, be done, but it will involve some programming, and in PostScript at that.
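For the record, the uniform case is straightforward; something along these lines (a sketch, so check the Ghostscript documentation for your version) forces every page onto A3 and scales the content to fit:
gs -o out.pdf -sDEVICE=pdfwrite -sPAPERSIZE=a3 -dFIXEDMEDIA -dPDFFitPage in.pdf
The conditional, per-page resizing you are asking for is exactly the part that needs the custom PostScript discussed next.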
You would probably want to look at the pdf_PDF2PS_matrix routine in ghostpdl/Resource/Init/pdf_main.ps:
% Compute the matrix that transforms the PDF->PS "default" user space
/pdf_PDF2PS_matrix { % <pdfpagedict> -- matrix
...
Which calculates the scale factors required when resizing content to fit the media.
Also pdfshowpage_setpage:
/pdfshowpage_setpage { % <pagedict> pdfshowpage_setpage <pagedict>
6 dict begin % for setpagedevice
% Stack: pdfpagedict
...
Which is where the selection of the media size takes place.
After spending a long time looking for a solution, I found a great, and yet affordable, tool capable of doing the resizing and a lot more: PStill (http://www.pstill.com/).

Animated GIF larger than source images

I'm using ImageMagick to create an animated GIF out of ~60 JPG 640x427px photos. The combined size of the JPGs is about 4 MB.
However, the output GIF is ~12MB. Is there a reason why the GIF is considerably bigger? Can I conceivably achieve a GIF size of ~4MB?
The command I'm using is:
convert \
  -channel RGB \
  -delay 2x10 \
  -size 640 \
  -loop 0 \
  -dispose Background \
  -layers Optimize \
  portrait/*.jpg portrait.gif
(-channel RGB and -dispose Background gave no improvement in size; -layers Optimize saved about 2 MB.)
Using gifsicle didn't seem to improve things either.
JPG is lossy compression.
GIF is lossless compression.
A better comparison would be to convert all the source images to GIF first, then combine them.
The first Google hit for GIF compression is http://ezgif.com/optimize, which claims lossy GIF compression. It might work for you, but I offer no warranty as I haven't tried it.
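To try the convert-to-GIF-first comparison suggested above, a quick (untested) sketch:
for f in portrait/*.jpg; do convert "$f" "${f%.jpg}.gif"; done
convert -delay 2x10 -loop 0 -layers Optimize portrait/*.gif portrait.gif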
JPEG achieves its compression through a (lossy) transform: a 16x16 or 8x8 block of pixels is transformed to a frequency representation and then quantized. Instead of spending e.g. 256 levels (i.e. 8 bits) on each red/green/blue component, JPEG can ignore some frequency components, or use just 1 or 2 bits to represent them.
GIF, on the other hand, works by identifying repeated patterns in a paletted image (up to 256 entries) which occur exactly in the previously encoded/decoded stream. Because of both the JPEG compression and the kind of source material typically encoded with JPEG (natural, full-color photographs), the probability of (long) exact matches is quite low.
60 RGB images at 640x427 is about 16 million pixels. To represent that much in 4 MB requires a compression of about 2 bits per pixel. To achieve this with GIF would require a very lossy algorithm that quantizes true-color pixels not just to the closest entry in the target GIF palette, but based also on how good a dictionary of code words that particular selection will build. The dictionary builds up slowly, and to reach 2 bits/pixel the average decoded code word would have to cover 5.5 matching pixels in the close neighborhood.
By contrast, ImageMagick has already managed to compress the 16 million pixels (each selected from a palette of 256 entries) to 75%!

I need to detect the approximate location of a QR code in a scanned image (PDF converted to PNG)

I have many scanned documents in PDF.
I use ImageMagick with Ghostscript to convert PDF to PNG at high density: convert -density 288 2.pdf 2.png. After that I read the pixels with PHP to find where the QR code is and decode it. Because the image is very big (~2500px), this needs a lot of RAM. Before reading the pixels with PHP, I want to crop the image with ImageMagick and keep only the part with the QR code.
Can I detect the approximate location of the QR code with ImageMagick, crop it, and keep only that part?
Sample PDF
Converted PNG
Further Update
I see your discussion with Kurt about better extraction of the image from the PDF in the first place, and his recommendation was to use pdfimages. I just wanted to add that you won't find it with brew search pdfimages; you actually need to use
brew install poppler
and then you get the pdfimages executable.
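With that installed, extracting the embedded page images is along these lines (the -png output option needs a reasonably recent poppler):
pdfimages -png input.pdf img
which writes img-000.png, img-001.png, and so on.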
Updated Answer
If you change the tile size to 100x100 on the crop command and run this for the second PDF you supplied:
convert -density 288 pdf2.pdf -crop 100x100 tile%04d.png
and then use the same entropy analysis command
convert -format "%[entropy]:%X%Y:%f\n" tile*.png info: | sort -n
...
...
0.84432:+600+3100:tile0750.png
0.846019:+600+2800:tile0678.png
0.980938:+700+400:tile0103.png
0.984906:+700+500:tile0127.png
0.988808:+600+400:tile0102.png
0.998365:+600+500:tile0126.png
The last 4 listed tiles are
Likewise for the other PDF file you supplied, you get
0.863498:+1900+500:tile0139.png
0.954581:+2000+500:tile0140.png
0.974077:+1900+600:tile0163.png
0.97671:+2000+600:tile0164.png
which means these tiles
I would think that should help you approximately locate the QR code.
Original Answer
This is not all that scientific, but it may help you get started. The key, I think, is the entropy of the various areas of the image. The QR code has a lot of information encoded in a small area, so it should have high entropy. So, I use ImageMagick to split the image into square 400x400 tiles like this:
convert image.png -crop 400x400 tile%03d.png
which gives me 54 tiles. Then I calculate the entropy of each of the tiles and sort them by increasing entropy, also outputting their offsets from the top left of the frame, and their name, like this:
convert -format "%[entropy]:%X%Y:%f\n" tile*.png info: | sort -n
0.00408949:+1200+2800:tile045.png
0.00473755:+1600+2800:tile046.png
0.00944815:+800+2800:tile044.png
0.0142171:+1200+3200:tile051.png
0.0143607:+1600+3200:tile052.png
0.0341039:+400+2800:tile043.png
0.0349564:+800+3200:tile050.png
0.0359226:+800+0:tile002.png
0.0549334:+800+400:tile008.png
0.0556793:+400+3200:tile049.png
0.0589632:+400+0:tile001.png
0.0649078:+1200+0:tile003.png
0.10811:+1200+400:tile009.png
0.116287:+2000+3200:tile053.png
0.120092:+800+800:tile014.png
0.12454:+0+2800:tile042.png
0.125963:+1600+0:tile004.png
0.128795:+800+1200:tile020.png
0.133506:+0+400:tile006.png
0.139894:+1600+400:tile010.png
0.143205:+2000+2800:tile047.png
0.144552:+400+2400:tile037.png
0.153143:+0+0:tile000.png
0.154167:+400+400:tile007.png
0.173786:+0+2400:tile036.png
0.17545:+400+1600:tile025.png
0.193964:+2000+400:tile011.png
0.209993:+0+3200:tile048.png
0.211954:+1200+800:tile015.png
0.215337:+400+2000:tile031.png
0.218159:+800+1600:tile026.png
0.230095:+2000+1200:tile023.png
0.237791:+2000+0:tile005.png
0.239336:+2000+1600:tile029.png
0.24275:+800+2400:tile038.png
0.244751:+0+2000:tile030.png
0.254958:+800+2000:tile032.png
0.271722:+2000+2000:tile035.png
0.275329:+0+1600:tile024.png
0.278992:+2000+800:tile017.png
0.282241:+400+1200:tile019.png
0.285228:+1200+1200:tile021.png
0.290524:+400+800:tile013.png
0.320734:+0+800:tile012.png
0.330168:+1600+2000:tile034.png
0.360795:+1200+2000:tile033.png
0.391519:+0+1200:tile018.png
0.421396:+1200+1600:tile027.png
0.421421:+2000+2400:tile041.png
0.421696:+1600+2400:tile040.png
0.486866:+1600+1600:tile028.png
0.489479:+1600+800:tile016.png
0.611449:+1600+1200:tile022.png
0.674079:+1200+2400:tile039.png
and, hey presto, the last one listed (i.e. the one with the highest entropy) tile039.png is this one.
I have drawn a rectangle around its location using this command
convert image.png -stroke red -fill none -strokewidth 3 -draw "rectangle 1200,2400 1600,2800" a.jpg
I concede there may be luck involved, but I only have one image to test my mad theories on. You may need to tile twice, the second time with an x-offset and y-offset of half a tile width, so that you don't cut the QR code in half and split it across two tiles. You may need different tile sizes for different barcode sizes. You may need to consider the last 3-5 tiles located for your next algorithm. But I think it could form the basis of a method.
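Once you have an offset from the entropy listing, cropping just that region out for the decoder is a one-liner, e.g. for the 400x400 tile at +1200+2400:
convert image.png -crop 400x400+1200+2400 +repage qr.png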

How to convert 32 bit PNG to RGB565?

How can I accomplish this? A programmatic solution (Objective-C) is great, but even a non-programmatic one is good.
I have Pixelmator, but that doesn't give you the option. I can't seem to do it with Preview either.
I have tried googling, but haven't been able to find a solution so far. The only tool I have been able to do this with is TexturePacker, but that creates a sprite sheet.
You can use libpng to convert the PNG image to three-byte (8:8:8) RGB. Then you can downsample to the 5:6:5 16-bit color values of RGB565. If r, g, and b are the respective 8-bit colors (stored in an unsigned char type), then the 16-bit RGB565 value is:
((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
You can improve a tad on this by rounding instead of chopping, being careful not to overflow the values. You can also force the green value to be equal to the blue and red values when all three are equal in the original 8-bit values. Otherwise it is possible for colors that were originally gray to inadvertently take on color after conversion.
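A minimal sketch of that conversion in C, using the rounding variant (the form (x * 31 + 127) / 255 rounds to the nearest level without overflowing):
#include <stdint.h>

/* Pack 8:8:8 RGB into a 5:6:5 16-bit value, rounding each
   component to its nearest representable level. */
static uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    uint16_t r5 = (uint16_t)((r * 31 + 127) / 255);
    uint16_t g6 = (uint16_t)((g * 63 + 127) / 255);
    uint16_t b5 = (uint16_t)((b * 31 + 127) / 255);
    return (uint16_t)((r5 << 11) | (g6 << 5) | b5);
}
To keep grays gray, per the note above, you could replicate the 5-bit red value into the 6-bit green slot whenever r == g == b, i.e. g6 = (uint16_t)((r5 << 1) | (r5 >> 4));.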
Create a bitmap context with an RGB565 pixel format using Quartz, paint your PNG onto that context, then save the bitmap context to a file.
PNG does not support an RGB565 packing. You can always apply a posterize to the image (programmatically, with ImageMagick, or with any image editor), which amounts to discarding the less significant bits in each channel. When saving to PNG you will still be saving 8 bits per channel (unless you use a palette), but even then you will get an appreciable reduction in size, because of the PNG compression.
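With ImageMagick, for instance, the posterize shown below would be something like this (-posterize takes the number of levels per channel):
convert input.png -posterize 32 output.png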
A quick example: original:
after a simple posterize with 32 levels (equivalent to RGB555) applied with XnView
The size goes from 89 KB to 47 KB, with a small quality loss.
In the case of synthetic images with gradients, the quality loss could be much more noticeable (banding).
I received this answer from the creator of TexturePacker:
you can do it from command line - see
http://www.texturepacker.com/uncategorized/batch-converting-images-to-pvr-or-pvr-ccz/
Just adjust the options and set the output to .png instead of pvr.ccz.
Make sure that you do not overwrite your source images.
According to Wikipedia, which is always right, the only 16-bit PNG is a greyscale PNG. http://en.wikipedia.org/wiki/Portable_Network_Graphics
If you just add your 32-bit (alpha) or 24-bit (no alpha) PNG to your project as normal, and then set the texture format in Cocos2D, all should be fine. The code for that is:
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGB565];

ImageMagick: Image optimization for websites

I have a camera which produces photographs of 3008x2000 pixels. I use ImageMagick to scale and resize the photos for my website. The size of the images I am using on the website is 602x400. I use this command to reduce the size:
convert DSC_0124.JPG -scale 20% -size 24% img1.jpg
This produces an image which is 602x400 pixels in size, but the file size is always above 250 KB. More images on a single HTML page make the page heavier and the loading time longer. Are there any features in ImageMagick that will help me keep the file size as small as possible, ideally below 100 KB, while keeping the image dimensions at 602x400px? I have achieved similar optimisation with the SEAMonster tool for MS Windows, but as it doesn't have a command-line alternative, it wouldn't be of much help when there are hundreds of images to convert.
Use the command Delan proposed, with the additional -strip flag to remove EXIF data; this has reduced the size of some of my images drastically. Here is a bash script for Unix platforms, but you can use the second part alone for individual images.
for X in *.jpg; do convert "$X" -resize 602x400 -strip -quality 86 "$X"; done
This will convert all images in the directory.
Use -quality to set the compression level:
convert DSC_0124.JPG -scale 20% -size 24% -quality [0..100] img1.jpg
You can define the maximum size of the output image at 100KB like this:
convert DSC_0124.JPG -resize 602x400! -strip -define jpeg:extent=100KB img1.jpg
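And if you need that for a whole directory, the same command drops into the earlier loop (a sketch; adjust the output naming to taste):
for X in *.JPG; do convert "$X" -resize 602x400 -strip -define jpeg:extent=100KB "web_$X"; done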
If you are running your website on PHP, you might want to consider the SLIR image resizing script; it does a great job resizing to various constraints (see below) and caches the results.
Parameters:
w Maximum width
h Maximum height
c Crop ratio
q Quality
b Background fill color
p Progressive
http://shiftingpixel.com/2008/03/03/smart-image-resizer/
http://code.google.com/p/smart-lencioni-image-resizer/