How to optimize a lot of GIFs at the same time?

My question refers to GIF optimization.
I've used the main well-known tools for GIF optimization, but each of them only optimizes one GIF at a time.
Does anyone know how to optimize thousands of GIFs in one go?
Thanks in advance

The open-source project ImageMagick offers a suite of command-line tools for batch-processing images.
Check out the article "ImageMagick v6 Examples -- Animation Optimization" for a how-to on performing some GIF optimizations.
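Since ImageMagick is scriptable, batching thousands of GIFs is mostly a matter of looping over the files. Here is a minimal sketch, assuming ImageMagick's convert is on your PATH; the folder names are hypothetical:

    import pathlib
    import subprocess

    src = pathlib.Path("gifs")            # folder with the original GIFs (hypothetical)
    dst = pathlib.Path("gifs_optimized")
    dst.mkdir(exist_ok=True)

    for gif in src.glob("*.gif"):
        out = dst / gif.name
        # -layers Optimize is the frame-optimization technique from the article above
        subprocess.run(["convert", str(gif), "-layers", "Optimize", str(out)], check=True)
        print(f"optimized {gif.name}")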

Related

How to avoid a double-up of effort for retina, when using tile maps from Tiled with Cocos2d

I've got retina tile maps working: a 15x10 map of 64x64-pixel tiles. The problem is that for non-retina devices I will need a 15x10 map of 32x32-pixel tiles. I don't want to recreate the tile map; is it just a case of changing the XML (.tmx) file? Is there an automated tool or another way around this? I've been looking online but not getting much help.
Thanks
You have to update the TMX file and scale certain attributes. Unless your TMX map is very simple this will be a tedious and error-prone task that's best left to a tool.
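For illustration, here is a minimal sketch of what that scaling involves on a trivial map, using Python's standard XML library. It only halves the map and tileset tile sizes; a real map would also need image sources swapped and any object-layer coordinates scaled, which is exactly why a dedicated tool is preferable. The file names are hypothetical:

    import xml.etree.ElementTree as ET

    tree = ET.parse("level1.tmx")   # hypothetical map file
    root = tree.getroot()           # the <map> element

    # Halve the tile dimensions on the map and on every tileset
    for element in [root] + list(root.iter("tileset")):
        for attr in ("tilewidth", "tileheight"):
            if element.get(attr) is not None:
                element.set(attr, str(int(element.get(attr)) // 2))

    tree.write("level1-sd.tmx")     # write the non-retina version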
There are a variety of TMX rescaling tools out there, but some didn't work for me or were simply incomplete at the time (e.g. one didn't scale object layers). The tools I know of are generally written in languages rather unusual for an iOS developer, like Python, Ruby or Bash. Others are only available as binaries without the source code.
Check out this cocos2d forum post. Specifically this tool or HDx on the App Store. iTilemaps might also work for you.
Because I wasn't happy with either of the choices, I wrote my own command line tool tmx2scale in Objective-C to rescale TMX maps intelligently in all directions. The tmx2scale tool is not currently available but it will be distributed complete with source code with the KoboldScript Game Kit project.

A batch PNG processor for Windows to pass Google's Page Speed test

I have Google's Page Speed plugin installed: http://code.google.com/speed/page-speed/
It says that a lot of the PNGs on my site aren't compressed.
I tried the RIOT image optimizer: http://luci.criosweb.ro/riot/
However, despite trying multiple settings, I couldn't get the images to pass.
Any ideas? Thanks!
You could try pngcrush, but presumably you'll get much greater savings from converting to JPEG with quality slightly less than 100 (I usually find 92 pretty good). ImageMagick would be the tool of choice for bulk processing.
I never managed to create paletted PNGs, but in principle those should be pretty efficient when you're dealing with illustrations that only use a few colours.
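For the bulk PNG-to-JPEG conversion suggested above, here's a short sketch, done with Pillow rather than ImageMagick to keep everything in one scripting language. Quality 92 matches the figure mentioned, and the folder name is hypothetical:

    import pathlib
    from PIL import Image

    for png in pathlib.Path("site_images").glob("*.png"):
        img = Image.open(png).convert("RGB")  # JPEG has no alpha channel
        img.save(png.with_suffix(".jpg"), "JPEG", quality=92)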
The good PNG optimizers are:
pngout http://advsys.net/ken/utils.htm
pngcrush http://pmt.sourceforge.net/pngcrush/
optipng http://optipng.sourceforge.net/
advpng http://advancemame.sourceforge.net/comp-readme.html
For best results use all 4 in that order.
You can also use pngnq http://pngnq.sourceforge.net/ to reduce the image even more at the cost of some quality. (And after using pngnq, run the image through the optimizers.)
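A minimal sketch of that four-tool pipeline, shelling out to each optimizer in place. The flags shown are the commonly used maximum-effort settings, but they vary between versions, so check each tool's help output first:

    import pathlib
    import subprocess

    for png in pathlib.Path("images").glob("*.png"):    # hypothetical folder
        for cmd in (
            ["pngout", str(png)],             # writes the result back to the same file
            ["pngcrush", "-ow", str(png)],    # -ow = overwrite the input in place
            ["optipng", "-o7", str(png)],     # highest optimization level
            ["advpng", "-z", "-4", str(png)], # -z recompress, -4 = maximum effort
        ):
            # check=False: don't abort the whole batch if one tool fails on a file
            subprocess.run(cmd, check=False)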
Thanks for all the suggestions, guys. I think in my case the easiest method is to grab the cache files that Google Page Speed produces. Here is the info: http://code.google.com/speed/page-speed/docs/using_firefox.html#savefiles
Also, you'll need to run it in Firefox, as Chrome doesn't produce the same files.

Looking for a lossless compression api similar to smushit

Anyone know of a lossless image compression API/service similar to Smush.it from Yahoo?
From their own FAQ:
WHAT TOOLS DOES SMUSH.IT USE TO SMUSH IMAGES?
We have found many good tools for reducing image size. Often times these tools are specific to particular image formats and work much better in certain circumstances than others. To "smush" really means to try many different image reduction algorithms and figure out which one gives the best result.
These are the algorithms currently in use:
ImageMagick: to identify the image type and to convert GIF files to PNG files.
pngcrush: to strip unneeded chunks from PNGs. We are also experimenting with other PNG reduction tools such as pngout, optipng, pngrewrite. Hopefully these tools will provide improved optimization of PNG files.
jpegtran: to strip all metadata from JPEGs (currently disabled) and try progressive JPEGs.
gifsicle: to optimize GIF animations by stripping repeating pixels in different frames.
More information about the smushing process is available at the Optimize Images section of Best Practices for High Performance Web pages.
It mentions several good tools. By the way, the very same FAQ mentions that Yahoo will make Smush.it a public API sooner or later so that you can run it on your own. Until then you can just upload images separately to Smush.it here.
Try Kraken Image Optimizer: https://kraken.io/signup
The developer's plan is free, but it only returns dummy results. You must subscribe to one of the paid plans to use the API; however, the web interface is free and unlimited for images of up to 1MB.
Find out more in the Kraken documentation.
See this:
http://github.com/thebeansgroup/smush.py
It's a Python implementation of smushit that can be run off-line to optimise your images without uploading them to Yahoo's service.
As far as I know, the best image compression service for me is TinyPNG.
They also have an API: https://tinypng.com/developers
From their documentation: "Once you retrieve your key, you can immediately start shrinking images. Official client libraries are available for Ruby, PHP, Node.js, Python and Java. You can also use the WordPress plugin, the Magento 1 extension or the improved Magento 2 extension to compress your JPEG and PNG images."
And the first 500 images per month are free.
Tip: via their API there is no file-size limit (unlike their online tool's maximum of 5MB per image).
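A minimal sketch using TinyPNG's official Python client (pip install tinify); the key string is a placeholder for your own API key, and the file names are hypothetical:

    import tinify

    tinify.key = "YOUR_API_KEY"  # placeholder; get a real key from the developers page

    # Upload, compress, and download the optimized result
    tinify.from_file("unoptimized.png").to_file("optimized.png")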

How to give the best chance of success to OCR software?

I am using Tesseract OCR (via pytesser) and PIL (Python Imaging Library) for automated testing of an application.
I am checking that the displayed text is correct by taking a screenshot and extracting the text with Tesseract.
I had some issues in the beginning, and it seems to work better since I increased the size of the screenshot using PIL's bicubic interpolation.
Unfortunately, I still get some mistakes, like confusion between '0' and 'O', and I can imagine I will have other similar issues in the future.
I would like to know if there are techniques to prepare an image in order to help the OCR. Any idea is welcome.
Thanks in advance
Shameless plug and disclaimer: my company packages Tesseract for use in .NET
Tesseract is an OK OCR engine. It can miss a lot and gets readily confused by non-text. The best thing you can do for it is to make sure it gets text only. The next best thing is to give it something sanely binarized (adaptive or dynamic threshold to get there) or grayscale and let it try to do binarization.
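A minimal sketch of that preparation with PIL: upscale (as the question's author already found helpful), convert to grayscale, then apply a crude global threshold. The threshold value is a guess to tune per application, and the file names are hypothetical:

    from PIL import Image

    img = Image.open("screenshot.png")
    w, h = img.size
    img = img.resize((w * 3, h * 3), Image.BICUBIC)   # upscale before recognition
    img = img.convert("L")                            # grayscale
    img = img.point(lambda p: 255 if p > 160 else 0)  # crude global threshold
    img.save("prepared.png")                          # feed this file to Tesseract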
Train Tesseract to recognize your font
Make the image extra clean, with enough free space around the characters
Profit :)
Here are a few real-world examples.
The first image is the original (cropped power-meter digits).
The second image is slightly cleaned up in GIMP: around 50% OCR accuracy in Tesseract.
The third image is the completely cleaned image: 100% recognized without any training!
Even under the best conditions, OCR misreads will sneak up on you. Your best option is to design your tests to be aware of them.
For distinguishing between 0 and O, one simple solution is to choose a font that visually distinguishes the two (e.g. a 0 with a dash or dot in its middle). Would that be acceptable in your application?
Another solution is to apply a dictionary-based step after the character-by-character analysis of the text - feeding the recognized text into some form of spell-checker or validator to differentiate between difficult characters.
For instance, a round symbol followed by other numbers is most likely to be a zero, while the same symbol followed by letters is most likely to be a capital o. It's a trivial example, but it shows how context is necessary to make a more reliable OCR system.
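That contextual rule is easy to prototype. A minimal sketch, purely illustrative (a real validator would use a dictionary or grammar):

    import re

    def fix_o_zero(text: str) -> str:
        # An 'O' between digits was probably a zero...
        text = re.sub(r"(?<=\d)O(?=\d)", "0", text)
        # ...and a '0' between letters was probably a capital O.
        text = re.sub(r"(?<=[A-Za-z])0(?=[A-Za-z])", "O", text)
        return text

    print(fix_o_zero("ID 1O3 REV A0C"))  # -> ID 103 REV AOC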

Ultra fast drawing in DotNET

Initial tests indicate that GDI+ (writing in VB.NET) is not fast enough for my purposes. My application needs to draw tens of thousands of particles (coloured circles, very preferably anti-aliased) at full-screen resolution at 20+ frames per second.
I'm hesitant to step away from GDI+ since I also require many of its other advanced drawing features (dash patterns, images, text, paths, fills).
I'm looking for good advice about using OpenGL, DirectX or other platforms to speed up particle rendering from within VB.NET. My app is strictly 2D.
Goodwill,
David
If you want to use VB.NET, you can go with XNA or SlimDX.
I have some experience creating games with GDI+ and XNA, and I can understand that GDI+ is giving you trouble.
If I were you I'd check out XNA; it's much faster than GDI+ because it actually uses your video card for drawing, and it has a lot of good documentation and examples online.
SlimDX also looks good, but I don't have any experience with it. SlimDX is basically the DirectX API for .NET.
The only way to get the speed you need is to move away from software rendering to hardware rendering... and unfortunately that does mean moving to OpenGL or DirectX.
The alternative is to try and optimise your graphics routines to only draw the particles that need to be drawn, not the whole screen/window.
I would agree with JaredPar that you're better off profiling first to determine if your existing codebase can be improved before making a huge switch to a new framework. DirectX is not the easiest framework if you're unfamiliar with it.
The most significant speed increase I found, when writing a game maker with GDI+, was to convert my bitmaps to Format32bppPArgb:
SuperFastBitmap = ConvertImagePixelFormat(SlowBitmap, Imaging.PixelFormat.Format32bppPArgb)
If they are not in this format already, you'll see the difference immediately when you convert.
It's possible the problem is in your algorithm and not GDI+. Profiling is the only way to know for sure. Without a profile it's very possible you will switch to a new GUI framework and hit the exact same problems.
If you did profile, what part of GDI+ was causing a problem?
As Jared said, it could be that a significant fraction of your cycles are not going into GDI+, and you might be able to reduce those.
A simple way to find them is to halt the program at random a few times and examine the call stack. The chance that you catch it in the act of wasting time is equal to the fraction of time being wasted.
Any instruction or call instruction that appears on more than one such sample is something that, if you could replace it, would give you a speedup.
As you're working in VB.NET, have you tried using WPF (part of .NET since 3.0)? As WPF is based on DirectX rather than GDI+, it should give you the speed you need, although developing in WPF is not straightforward at all.
Because GDI+ is not accelerated by the graphics card, it renders slowly on the CPU. Instead, you can use DirectX or SlimDX.
(Sorry for my bad English.)
See these: http://msdn.microsoft.com/en-us/library/windows/desktop/ff729480%28v=vs.85%29.aspx
http://www.codeproject.com/Articles/159586/Starting-DirectX-with-Visual-Basic-NET