Tesseract OCR specify number/location of characters - config

I'm having trouble getting accurate OCR results with Tesseract. I have a series of many small images with time, date, latitude and longitude text that I'm trying to have Tesseract read. The images are cropped from videos, some of low quality.
The images from each group all have the same format (e.g. time: ## : ## : ##, see image below).
But Tesseract is giving inconsistent results. For example, an image showing "14:10:08" is read as "142:" (one of the worst results from the data set).
I have trained Tesseract for these images, and that improved the results.
My main question is: is there a way to specify the number of characters Tesseract should read? In this case, something like tesseract time.png time -num_char 8? Or is there a way to specify the width of the blobs/boxes Tesseract should expect when looking at the picture?
I've experimented with config files (as explained on this site), but there are a whole lot of settings and I don't understand many of the explanations (is "Max width of blobs to make rows" what I'm looking for? It didn't seem to help...).
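To show the direction I've been exploring, here is a rough sketch written with pytesseract for convenience (the file name, whitelist, and page-segmentation mode are just examples and may need adjusting for your Tesseract version):

```python
# Rough sketch: restrict Tesseract to the characters that can appear in the
# timestamp and treat the image as a single line of text.
# Assumes Tesseract 3.03+ (for -c) and the pytesseract wrapper.
from PIL import Image
import pytesseract

img = Image.open("time.png")  # placeholder file name

# --psm 7: treat the image as a single text line (use -psm on Tesseract 3.x)
# tessedit_char_whitelist: only allow digits and ':'
config = "--psm 7 -c tessedit_char_whitelist=0123456789:"

print(pytesseract.image_to_string(img, config=config))  # e.g. "14:10:08"
```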
Thanks for any suggestions.

Related

How to convert scanned document images to a PDF document with high compression?

I need to convert scanned document images to a PDF document with high compression. The compression ratio is very important. Can someone recommend a solution in C# for this task?
Best regards, Alexander
There is a free program called PDFBeads that can do it. It requires Ruby, ImageMagick and optionally jbig2enc.
The PDF format itself will probably add next to no overhead in your case; your images will account for most of the output file size.
So you should compress your images with the highest possible compression. For black-and-white images you might get the smallest output using the FAX4 or JBIG2 compression schemes (both supported in PDF files).
For other images (grayscale, color), either use the smallest possible size, lowest resolution and quality, or convert the images to black-and-white and use the FAX4/JBIG2 compression scheme.
Please note that you will most likely lose some detail when converting an image to black-and-white.
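To illustrate the black-and-white route, here is a minimal sketch in Python using Pillow (the question asks about C#, so treat this only as an outline of the idea; file names are placeholders):

```python
# Sketch: convert a scanned page to 1-bit black-and-white and save it with
# CCITT Group 4 (FAX4) compression, which PDF files can embed.
# Uses Pillow; "page.png" / "page.tif" are placeholder names.
from PIL import Image

scan = Image.open("page.png")

# Convert to bilevel; Pillow applies a simple threshold/dither here,
# so some detail will be lost, as noted above.
bw = scan.convert("1")

# Group 4 compression is only defined for 1-bit images.
bw.save("page.tif", compression="group4")
```

The resulting 1-bit image can then be embedded into the PDF with whichever PDF library you end up using.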
If you are looking for a library that can help you with recompression, then have a look at the Docotic.Pdf library (disclaimer: I am one of the developers of the library).
The "Optimize images" sample code shows how to recompress images before adding them to a PDF. The sample shows how to recompress with JPEG, but for FAX4 the code will be almost the same.

How can I parse a captcha image whose data changes?

How do I parse a captcha image or get the data from it? The data is part of the image and changes with each reload. How can I read the data in the image? Can I do anything with the image's data URL?
The following is an example captcha:
http://enquiry.indianrail.gov.in/ntes/CaptchaServlet?action=getNewCaptchaImg&t=1400870602238
Using OCR (Optical Character Recognition) is the first step. Below are two examples of tools/APIs that can help you with that.
Try Tesseract.
Tesseract is probably the most accurate open source OCR engine
available. Combined with the Leptonica Image Processing Library it can
read a wide variety of image formats and convert them to text in over
60 languages.
For more info, check https://code.google.com/p/tesseract-ocr/
You can also try OCRopus
OCRopus is an OCR system written in Python, NumPy, and SciPy focusing
on the use of large scale machine learning for addressing problems in
document analysis.
For more info, check https://code.google.com/p/ocropus/
For detailed info with code samples on how to do this, check Ben Boyter's article "Decoding CAPTCHA's" at http://www.boyter.org/decoding-captchas/
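As a minimal starting point (not from the original answer; it assumes the pytesseract wrapper and that the captcha has already been saved to a local file):

```python
# Minimal sketch: run Tesseract over a downloaded captcha image.
# Assumes Pillow and pytesseract are installed; "captcha.jpg" is a placeholder.
from PIL import Image
import pytesseract

img = Image.open("captcha.jpg")

# Plain text output; real captchas usually need cleanup (thresholding,
# removing noise and lines) before Tesseract gives usable results.
print(pytesseract.image_to_string(img))
```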

Reducing the size of pdf generated from software using proprietary fonts

I am trying to bring an Indian magazine online. This magazine is typed in CorelDraw using a proprietary Devanagari font (http://www.modular-infotech.com/html/shreelipi.html). The vendor provides a USB dongle that has to be attached to the machine whenever you want to access the fonts, and this software has been in use for the past 10 years.
To put the magazine online, we've tried to convert it to PDF (by printing). The resulting PDF is on the order of 30-50MB, even though it does not contain a single image. I am guessing the whole text is converted into an image.
It would be really difficult for users to read this magazine given its size. When I convert it to .swf format (to add flipbook-like functionality), the size drops to 5-6MB, but there are people who like to download the magazine and then read it. I have had no luck reducing the size of the PDF.
I have done a lot of research on the web. PostScript and PrimoPDF did not help much. The best I could get was a 30% reduction using the DocuCom PDF printer, but it is still 20MB. I have tried to play with resolution, compression and quality, but the best I could get was 18MB.
Ideally I would like to reduce it to less than 2MB.
I would be really grateful if you could help me reduce the size of the PDF! Considering that it has no images, I am hopeful that I can get some really good compression.
The (35MB) magazine can be downloaded from: http://merajhola.in/jin-march.pdf
I can't see any easy way to reduce the size of this PDF. There are no embedded fonts and all the text is drawn using vector graphics primitives. No amount of tweaking the resolution, compression and quality will have a significant improvement.
One possible option would be to embed the font as a subset rather than using vector graphics. That would almost certainly make a big difference; however, I doubt the proprietary font license allows it.
I'm sorry, but this Shree-Lipi thing just sounds wrong in 2012. It would be much better to use proper OpenType fonts with modern (say InDesign) or free (say LuaTeX) software.

BIG header: one jpg or several png?

I've read some posts here about PNG/JPG/GIF, but I'm still quite confused.
I've got a big header image on my website:
width: 850px, height: 380px, file size: 108KB
It's a JPG: a woman, a gradient, and some layers on top of and behind her.
Do you think 108KB is too much? I was thinking about cutting it into PNG pieces. Would that be a bad idea? What are your suggestions? Thanks for the help.
It depends on the nature of the image. If it's a photograph, JPEG gives the best quality/compression ratio; if it's pixel work like writing or clipart, or needs transparency, then choose GIF or PNG (GIF and PNG8 offer single-level transparency, while PNG24 offers per-pixel alpha transparency).
When I'm unsure, I usually save the picture in all three formats and decide which gives the best quality/size trade-off within the size limits I need. I also try to lower the JPEG quality to the lowest level at which the image still looks good (because that varies from one image to another).
Also, if it is a photograph with some writing on it, split it into a JPEG photograph with a transparent GIF overlay for the writing (because text edges look distorted in JPEG).
So, when you are unsure, give all three a try and decide; with time you'll gain a feel for which format suits which content best.
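If you want to compare quickly, a small script along these lines (Pillow; the file name is a placeholder) saves the same image in several formats and qualities so you can weigh size against quality yourself:

```python
# Sketch: save one source image as PNG and as JPEG at a few quality levels,
# then print the resulting file sizes for comparison.
# Uses Pillow; "header.png" is a placeholder for your source image.
import os
from PIL import Image

src = Image.open("header.png").convert("RGB")  # JPEG has no alpha channel

src.save("header_out.png")
for quality in (60, 75, 90):
    src.save("header_q%d.jpg" % quality, quality=quality)

for name in ("header_out.png", "header_q60.jpg", "header_q75.jpg", "header_q90.jpg"):
    print(name, os.path.getsize(name) // 1024, "KB")
```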
Actually, 108KB for an image of that size isn't abnormal. It's also better to keep it as one image: the browser only needs to perform one GET request. If you break it up into multiple images, the user waits longer for the site to load.
PNG probably wouldn't help you since JPG is usually much more efficient at handling gradients. PNG would be better if you had big unicolored spaces in your image.

How to give best chance of success to an OCR software?

I am using Tesseract OCR (via pytesser) and PIL (the Python Imaging Library) for automated testing of an application.
I am checking that the displayed text is correct by taking a screenshot and extracting the text with Tesseract.
I had some issues in the beginning, and it seems to work better since I enlarged the screenshot using PIL's bicubic interpolation.
Unfortunately, I still get some mistakes, such as confusion between '0' and 'O', and I can imagine I will hit other similar issues in the future.
I would like to know whether there are techniques for preparing an image to help the OCR. Any idea is welcome.
Thanks in advance
Shameless plug and disclaimer: my company packages Tesseract for use in .NET
Tesseract is an OK OCR engine. It can miss a lot and gets readily confused by non-text. The best thing you can do for it is to make sure it gets text only. The next best thing is to give it something sanely binarized (use an adaptive or dynamic threshold to get there) or grayscale input and let it attempt the binarization itself.
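For example, a rough preprocessing pass along those lines might look like this (sketched with Pillow and pytesseract rather than pytesser; the scale factor and threshold are arbitrary starting points you would tune for your screenshots):

```python
# Sketch: upscale, grayscale, and binarize a screenshot before OCR.
# The 3x scale factor and the threshold of 150 are arbitrary starting points.
from PIL import Image
import pytesseract

img = Image.open("screenshot.png")  # placeholder file name

# Upscale with bicubic interpolation so the characters are larger
img = img.resize((img.width * 3, img.height * 3), Image.BICUBIC)

# Grayscale, then a simple fixed threshold for a clean black/white image
gray = img.convert("L")
bw = gray.point(lambda p: 255 if p > 150 else 0, mode="1")

print(pytesseract.image_to_string(bw))
```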
Train tesseract to recognize your font
Make image extra clean and with enough free space around characters
Profit :)
Here are a few real-world examples.
The first image is the original (cropped power-meter numbers).
The second image is slightly cleaned up in GIMP; around 50% OCR accuracy in Tesseract.
The third image is completely cleaned up; 100% recognized by OCR without any training!
Even under the best conditions, OCR variations will sneak up on you. Your best option is to design your tests to be aware of them.
For distinguishing between 0 and O, one simple solution is to choose a font that distinguishes between the two (e.g., one where the 0 has a dash or dot in its middle). Would that be acceptable in your application?
Another solution is to apply a dictionary-based step after the character-by-character analysis of the text, feeding the recognized text into some form of spell-checker or validator to differentiate between difficult characters.
For instance, a round symbol followed by other numbers is most likely a zero, while the same symbol followed by letters is most likely a capital O. It's a trivial example, but it shows how context is needed to build a more reliable OCR system.
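A toy version of that post-processing step might look like this (the substitution rules below are only examples; a real validator would be tailored to the text you expect):

```python
# Toy sketch of context-based correction: inside tokens made of digits,
# ':' and the commonly confused letters, treat 'O'/'o' as zero and 'l'/'I' as one.
# The rules are illustrative only.
import re

def fix_digit_confusions(text):
    def clean(match):
        return (match.group(0)
                .replace("O", "0").replace("o", "0")
                .replace("l", "1").replace("I", "1"))
    # Only touch tokens built entirely from digits, ':' and confusable letters.
    return re.sub(r"\b[\dOolI:]{2,}\b", clean, text)

print(fix_digit_confusions("Reading at 14:1O:O8"))  # -> "Reading at 14:10:08"
```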